How to Use ImageMagick to Resize Images

Optimized Conversion:

# This is an optimized resizing command to convert PNG to JPG with minimal loss of image quality
outputPath=./resized
outputWidth=1200
mkdir $outputPath
originalFormat=png
newFormat=jpg
mogrify -path $outputPath -format $newFormat -resize $outputWidth -filter Triangle -define filter:support=2 -unsharp 0.25x0.25+8+0.065 -dither None -posterize 136 -quality 82 -define jpeg:fancy-upsampling=off -define png:compression-filter=5 -define png:compression-level=9 -define png:compression-strategy=1 -define png:exclude-chunk=all -interlace none -colorspace sRGB -strip *.$originalFormat
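Note that passing only a width to -resize (here 1200) lets ImageMagick scale the height proportionally. A quick sketch of that geometry math in shell (the 2400x1600 original dimensions are hypothetical):

```shell
# Hypothetical original image dimensions
origWidth=2400
origHeight=1600
targetWidth=1200

# With only a width given, ImageMagick preserves the aspect ratio:
# newHeight = origHeight * targetWidth / origWidth
newHeight=$(( origHeight * targetWidth / origWidth ))
echo "${targetWidth}x${newHeight}"   # 1200x800
```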

The One-Liner Command:

# This command picks up all PNG images in the current directory and creates resized versions of them in the ./resized folder
mkdir ./resized
mogrify -path ./resized -resize 1200 *.png

This error occurs if the resized directory doesn’t exist:

kim@kim-linux:~$ cd ~
kim@kim-linux:/Scans$ mogrify -path ./resized -resize 1200 *.png 
mogrify-im6.q16: unable to open image `./resized//Military-Payment-Certificate-One-Dollar-Series-661-Backside.png': No such file or directory @ error/blob.c/OpenBlob/2874.
mogrify-im6.q16: WriteBlob Failed `./resized//Military-Payment-Certificate-One-Dollar-Series-661-Backside.png' @ error/png.c/MagickPNGErrorHandler/1641.
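The fix is simply to create the output directory before calling mogrify; mkdir -p is safe to re-run. A hedged sketch (the mogrify call is guarded so the snippet degrades gracefully when ImageMagick or PNG inputs are absent):

```shell
outputPath=./resized

# -p creates the directory if missing and is a no-op if it already exists
mkdir -p "$outputPath"

# Only attempt the resize when mogrify and PNG inputs are actually available
if command -v mogrify >/dev/null 2>&1 && ls *.png >/dev/null 2>&1; then
  mogrify -path "$outputPath" -resize 1200 *.png
else
  echo "mogrify or PNG inputs not available; skipping resize"
fi
```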

Uhhuh. NMI received for unknown reason 3d on CPU 4

Problem:

Message from syslogd@linux03 at Aug 31 04:28:21 ...
 kernel:[273033.123489] Uhhuh. NMI received for unknown reason 3d on CPU 4.

Message from syslogd@linux03 at Aug 31 04:28:21 ...
 kernel:[273033.123491] Do you have a strange power saving mode enabled?

Message from syslogd@linux03 at Aug 31 04:28:21 ...
 kernel:[273033.123491] Dazed and confused, but trying to continue

Solution: nothing… Error seems transient.

Ping Command’s First Packet Toward LDAP Server(s) Takes 2 Seconds to Start

Case 1: Are DNS servers working?
  • dig returns results right away => the configured DNS servers are working
  • dig returns results with a 2+ second delay, or times out => the configured DNS servers are NOT working

Recommendations:

  1. Test configuring client to use a different DNS server
    dig @dnsServer1.kimconnect.com ldapServerName
  2. Verify that routing and firewall rules are passing traffic from client to DNS servers
  3. Clean up invalid DNS records in AD
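The two outcomes above can be told apart by parsing dig's "Query time" footer. A sketch against a canned dig footer (the 2043 ms value is hypothetical; in practice pipe real dig output into the function):

```shell
# Classify a dig result by its reported query time (threshold: 2000 ms)
classify_dns() {
  queryMs=$(grep -oE 'Query time: [0-9]+' | grep -oE '[0-9]+')
  if [ -n "$queryMs" ] && [ "$queryMs" -lt 2000 ]; then
    echo "DNS OK (${queryMs} ms)"
  else
    echo "DNS SLOW or timing out"
  fi
}

# Hypothetical footer; replace with: dig @dnsServer1.kimconnect.com ldapServerName | classify_dns
result=$(printf ';; Query time: 2043 msec\n' | classify_dns)
echo "$result"
```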
Case 2: Is localhost able to cache hardware addresses?
  • arp -a returns results right away, and the LDAP server’s MAC address is present => ARP is working fine
  • arp -a takes a while to populate => indication that the localhost ARP table is having issues and not caching MAC-to-IP mappings for fast lookups

Recommendations:

a. Add a static arp entry into localhost

Command:

arp -s ip-address-of-ldap-server hardware-address-of-ldap-server
# Example:
sudo arp -s 10.10.10.10 aa:11:bb:22:cc:44

# How to reverse the change:
sudo arp --delete 10.10.10.10

# How to check the ARP Table:
sudo arp -avn # more verbose
sudo arp -n # simple view

b. Clear ARP cache & DNS cache

ip -s -s neigh flush all
arp -n
service nscd restart
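To check specifically whether the LDAP server's entry is cached, grep the ARP table for its IP. A sketch against canned `arp -an` output (the IP and MAC below are hypothetical; on a live host pipe in real `arp -an` output):

```shell
# Hypothetical LDAP server IP and a canned ARP table
ldapIp=10.10.10.10
sampleArpTable='? (10.10.10.10) at aa:11:bb:22:cc:44 [ether] on eth0
? (10.10.10.1) at 00:11:22:33:44:55 [ether] on eth0'

# Extract the MAC from the LDAP server's line, if present
mac=$(printf '%s\n' "$sampleArpTable" | grep "($ldapIp)" | grep -oE '([0-9a-f]{2}:){5}[0-9a-f]{2}')
if [ -n "$mac" ]; then
  echo "ARP entry present: $mac"
else
  echo "ARP entry missing - consider the static entry workaround above"
fi
```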

How To Install Graylog in a Kubernetes Cluster Using Helm Charts

The following narrative assumes that a Kubernetes cluster (current stable version 1.20.10) has been set up with the MetalLB ingress controller. This should also work with Traefik or other load balancers.

# Create a separate namespace for this project
kubectl create namespace graylog

# Change into the graylog namespace
kubectl config set-context --current --namespace=graylog
kubectl config view --minify | grep namespace: # Validate it

# Optional: delete previous test instances of graylog that have been deployed via Helm
helm delete "graylog" --namespace graylog
kubectl delete pvc --namespace graylog --all

# How to switch execution context back to the 'default' namespace
kubectl config set-context --current --namespace=default
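These namespace context switches can be wrapped in a tiny helper function (the `kns` name and the overridable KUBECTL variable are my own additions, not part of kubectl):

```shell
# KUBECTL defaults to kubectl; override it for testing or custom binaries
KUBECTL=${KUBECTL:-kubectl}

# Switch the current kubectl context to the given namespace
kns() {
  "$KUBECTL" config set-context --current --namespace="$1"
}

# Usage: kns graylog ; kns default
```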

# Optional: install mongodb prior to Graylog
helm install "mongodb" bitnami/mongodb --namespace "graylog" \
  --set persistence.size=100Gi
# Sample output:
NAME: mongodb
LAST DEPLOYED: Thu Aug 29 00:07:36 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
MongoDB® can be accessed on the following DNS name(s) and ports from within your cluster:
    mongodb.graylog.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace graylog mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, create a MongoDB® client container:
    kubectl run --namespace graylog mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.8-debian-10-r9 --command -- bash
Then, run the following command:
    mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace graylog svc/mongodb 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

# REQUIRED: Pre-install Elasticsearch 7.10, the highest version supported by Graylog 4.1.3
# Source: https://artifacthub.io/packages/helm/elastic/elasticsearch/7.10.2
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --namespace "graylog" \
  --set imageTag=7.10.2 \
  --set data.persistence.size=100Gi
# Sample output:
NAME: elasticsearch
LAST DEPLOYED: Sun Aug 29 04:35:30 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=graylog -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch

# Install Graylog with mongodb bundled, integrating with the pre-deployed Elasticsearch instance
#
# This install command assumes that the preferred protocol for transporting logs is TCP.
# The current helm chart does not allow mixing TCP with UDP; conveniently, this approach
# matches business requirements where a reliable TCP transport is necessary to record security data.
helm install graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.type=LoadBalancer \
  --set graylog.service.port=80 \
  --set graylog.service.loadBalancerIP=10.10.100.88 \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# Optional: add these lines if the mongodb component has been installed separately
  --set tags.install-mongodb=false \
  --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0 \

# Moreover, the graylog chart version 1.8.4 doesn't seem to set externalTrafficPolicy as expected.
# Set externalTrafficPolicy = local to preserve source client IPs
kubectl patch svc graylog-web -n graylog -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Sometimes, the static EXTERNAL-IP would be assigned to graylog-master, where graylog-web EXTERNAL-IP would
# remain in the status of <pending> indefinitely.
# Workaround: set services to share a single external IP
kubectl patch svc graylog-web -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'
kubectl patch svc graylog-web -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'

# Test sending logs to server via TCP
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -w 1 $graylogServer 514

# Test via UDP
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -u -w 1 $graylogServer 514

# Optional: graylog Ingress
cat > graylog-ingress.yaml <<EOF
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: graylog-ingress
  namespace: graylog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # set these for SSL
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"  
spec:
  rules:
  - host: graylog.kimconnect.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 12201
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 514              
EOF
kubectl apply -f graylog-ingress.yaml

Troubleshooting Notes:

# Sample commands to patch graylog service components
kubectl patch svc graylog-web -p '{"spec":{"type":"LoadBalancer"}}' # Convert ClusterIP to LoadBalancer to gain ingress
kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}' # Add externalIPs
kubectl patch svc graylog-master -n graylog -p '{"spec":{"loadBalancerIP":""}}' # Remove loadBalancer IPs
kubectl patch svc graylog-master -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}' # Purge ingress IPs
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[{"ip":"10.10.100.88"}]}}}'
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}'

# Alternative solution: mixing UDP with TCP
# The current chart version only allows this when service Type = ClusterIP (default)
helm upgrade graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.port=80 \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[0].protocol=UDP \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.service.ports[1].protocol=UDP \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# This error occurs when combining TCP with UDP; hence, ClusterIP must be specified
Error: UPGRADE FAILED: cannot patch "graylog-web" with kind Service: Service "graylog-web" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"graylog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:9000, StrVal:""}, NodePort:32518}, core.ServicePort{Name:"gelf", Protocol:"UDP", AppProtocol:(*string)(nil), Port:12201, TargetPort:intstr.IntOrString{Type:0, IntVal:12201, StrVal:""}, NodePort:0}, core.ServicePort{Name:"gelf2", Protocol:"TCP", AppProtocol:(*string)(nil), Port:12222, TargetPort:intstr.IntOrString{Type:0, IntVal:12222, StrVal:""}, NodePort:31523}, core.ServicePort{Name:"syslog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:31626}}: may not contain more than 1 protocol when type is 'LoadBalancer'

# Set array type value instead of string
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Service.spec.externalIPs): invalid type for io.k8s.api.core.v1.ServiceSpec.externalIPs: got "string", expected "array"
# Solution:
--set "array={a,b,c}" OR --set service[0].port=80

# Graylog would not start and this was the error:
com.github.joschi.jadconfig.ValidationException: Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable

# Workaround
graylogData=/mnt/k8s/graylog-journal-graylog-0-pvc-04dd9c7f-a771-4041-b549-5b4664de7249/
chown -fR 1100:1100 $graylogData
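To diagnose the "not writable" condition before and after the chown, a simple writability probe works. A self-contained sketch under /tmp (the demo path is a stand-in for the real PVC mount on the node; Graylog's container runs as uid/gid 1100, and the chown is hedged since it may require root):

```shell
# Stand-in for the real journal PVC mount
journalDir=/tmp/graylog-journal-demo
mkdir -p "$journalDir"

# Attempt the ownership fix; tolerate failure when not running as root
chown -fR 1100:1100 "$journalDir" 2>/dev/null || true

# -w checks writability for the current user running this probe
if [ -w "$journalDir" ]; then
  status="journal dir writable"
else
  status="journal dir NOT writable - apply the chown 1100:1100 workaround"
fi
echo "$status"
```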

NAME: graylog
LAST DEPLOYED: Thu Aug 29 03:26:00 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To connect to your Graylog server:
1. Get the application URL by running these commands:
  Graylog Web Interface uses JavaScript to get detail of each node. The client JavaScript cannot communicate to node when service type is `ClusterIP`.
  If you want to access Graylog Web Interface, you need to enable Ingress.
    NOTE: Port Forward does not work with web interface.
2. The Graylog root users
  echo "User: admin"
  echo "Password: $(kubectl get secret --namespace graylog graylog -o "jsonpath={.data['graylog-password-secret']}" | base64 --decode)"
To send logs to graylog:
  NOTE: If `graylog.input` is empty, you cannot send logs from other services. Please make sure the value is not empty.
        See https://github.com/KongZ/charts/tree/main/charts/graylog#input for detail

kubectl describe pod graylog-0
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         11m                   default-scheduler  Successfully assigned graylog/graylog-0 to linux03
  Normal   Pulled            11m                   kubelet            Container image "alpine" already present on machine
  Normal   Created           11m                   kubelet            Created container setup
  Normal   Started           10m                   kubelet            Started container setup
  Normal   Started           4m7s (x5 over 10m)    kubelet            Started container graylog-server
  Warning  Unhealthy         3m4s (x4 over 9m14s)  kubelet            Readiness probe failed: Get "http://172.16.90.197:9000/api/system/lbstatus": dial tcp 172.16.90.197:9000: connect: connection refused
  Normal   Pulled            2m29s (x6 over 10m)   kubelet            Container image "graylog/graylog:4.1.3-1" already present on machine
  Normal   Created           2m19s (x6 over 10m)   kubelet            Created container graylog-server
  Warning  BackOff           83s (x3 over 2m54s)   kubelet            Back-off restarting failed container


# Set external IP
# This only works on LoadBalancer, not ClusterIP
# kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}'
# kubectl patch svc graylog-master -p '{"spec":{"externalIPs":[]}}'

kubectl patch service graylog-web --type='json' -p='[{"op": "add", "path": "/metadata/annotations/kubernetes.io~1ingress.class", "value":"nginx"}]'

# Set annotation to allow shared IPs between 2 different services
kubectl annotate service graylog-web metallb.universe.tf/allow-shared-ip=graylog
kubectl annotate service graylog-master metallb.universe.tf/allow-shared-ip=graylog

metadata:
  name: $serviceName-tcp
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: psk

# Ingress
appName=graylog
domain=graylog.kimconnect.com
deploymentName=graylog-web
containerPort=9000
cat <<EOF> $appName-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $appName-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: $domain
    http:
      paths:
      - backend:
          service:
            name: $deploymentName
            port:
              number: 9000
        path: /
        pathType: Prefix
EOF
kubectl apply -f $appName-ingress.yaml

# delete pvc's
namespace=graylog
kubectl delete pvc data-graylog-elasticsearch-data-0 -n $namespace
kubectl delete pvc data-graylog-elasticsearch-master-0 -n $namespace
kubectl delete pvc datadir-graylog-mongodb-0 -n $namespace
kubectl delete pvc journal-graylog-0 -n $namespace

# delete all pvc's in namespace the easier way
namespace=graylog
kubectl get pvc -n $namespace --no-headers | awk '{print $1}' | while read vol; do kubectl delete pvc/${vol} -n $namespace; done

2021-08-20 20:19:41,048 INFO    [cluster] - Exception in monitor thread while connecting to server mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017 - {}
com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[graylog.jar:?]
        at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[graylog.jar:?]
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302]
Caused by: java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_302]
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[graylog.jar:?]
        ... 5 more

2021-08-20 20:19:42,981 INFO    [cluster] - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@69419d59 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}, caused by {java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}}]}. Waiting for 30000 ms before timing out - {}

# Alternative version - that doesn't work
# helm repo add groundhog2k https://groundhog2k.github.io/helm-charts/
# helm install graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri='http://127.0.0.1:9000/' \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true

# helm upgrade graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri=http://localhost:9000/ \
#   --set service.externalTrafficPolicy=Local \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true \
#   --set storage.className=nfs-client \
#   --set storage.requestedSize=200Gi

# kim@linux01:~$ k logs graylog-0
# 2021-08-29 03:47:09,345 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
# com.github.joschi.jadconfig.ValidationException: Couldn't run validator method
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:227) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:100) ~[graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:420) [graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:236) [graylog.jar:?]
#         at org.graylog2.bootstrap.Main.main(Main.java:45) [graylog.jar:?]
# Caused by: java.lang.reflect.InvocationTargetException
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]
#         ... 4 more
# Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: "!s"
#         at java.net.URLDecoder.decode(URLDecoder.java:194) ~[?:1.8.0_302]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1035) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1030) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.<init>(ConnectionString.java:336) ~[graylog.jar:?]
#         at com.mongodb.MongoClientURI.<init>(MongoClientURI.java:256) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.getMongoClientURI(MongoDbConfiguration.java:59) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.validate(MongoDbConfiguration.java:64) ~[graylog.jar:?]
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]

How to configure Ubiquiti EdgeRouter to send logs to a Syslog Server

Method 1: using text editor

# Edit the syslog config
sudo vi /etc/rsyslog.d/vyatta-log.conf

# Change the @ prefix (UDP) to @@ (TCP)
# Add :PORTNUMBER after the node name or IP if necessary
admin@EdgeRouter-4:~$ cat /etc/rsyslog.d/vyatta-log.conf
*.err	@graylog.kimconnect.com
*.notice;local7.debug	-/var/log/messages

Method 2: using sed to update the config

# Change from udp to tcp
sudo sed 's/@/@@/' -i /etc/rsyslog.d/vyatta-log.conf
cat /etc/rsyslog.d/vyatta-log.conf

# Change from tcp to udp
sudo sed 's/@@/@/' -i /etc/rsyslog.d/vyatta-log.conf
cat /etc/rsyslog.d/vyatta-log.conf

# Restart syslogd
sudo service rsyslog restart
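One caveat with the sed method: running the UDP-to-TCP substitution twice turns @@ into @@@@. A hedged, idempotent variant collapses any run of @ to exactly two, so re-running it is harmless:

```shell
# Idempotent TCP toggle: any run of @ becomes exactly @@
sampleLine='*.err	@graylog.kimconnect.com'
once=$(printf '%s\n' "$sampleLine" | sed -E 's/@+/@@/')
twice=$(printf '%s\n' "$once" | sed -E 's/@+/@@/')
echo "$twice"
```

On the router the same expression would be applied in place: sudo sed -E 's/@+/@@/' -i /etc/rsyslog.d/vyatta-log.conf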

How To Configure Alternative Storage for a Kubernetes (K8s) Worker Node

The illustration below assumes that a local RAID mount is being added to a worker node due to its lack of local storage to run kubelets and Docker containers.

# On K8s controller, remove worker node
kubectl drain linux03 --ignore-daemonsets
kubectl delete node linux03

# On the worker node, uninstall docker & kubelet
sudo apt-get remove docker-ce docker-ce-cli containerd.io kubelet

# Check the health of its RAID mount /dev/md0
mdadm --detail /dev/md0

# Sample expected output:
           Version : 1.2
     Creation Time : Fri Aug 13 23:46:13 2021
        Raid Level : raid10
        Array Size : 1953257472 (1862.77 GiB 2000.14 GB)
     Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sat Aug 28 23:39:08 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : bitmap
              Name : linux03:0  (local to host linux03)
              UUID : 
            Events : 1750
    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync set-A   /dev/sdg1
       1       8       81        1      active sync set-B   /dev/sdf1
       2       8       17        2      active sync set-A   /dev/sdb1
       3       8        1        3      active sync set-B   /dev/sda1

# Check the logical mount
mount=/nfs-share
df -hT -P $mount

# Sample expected output:
root@linux03:/home/kimconnect# df -hT -P $mount
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  1.8T   77M  1.7T   1% /nfs-share

# Prepare docker & kubelet redirected links
source1=/nfs-share/linux03/docker
source2=/nfs-share/linux03/kubelet
destinationdirectory=/var/lib/
sudo mkdir -p $source1
sudo mkdir -p $source2

# Optional: remove existing docker & kubelet directories
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/docker

# Create links
sudo ln -sfn $source1 $destinationdirectory
sudo ln -sfn $source2 $destinationdirectory

# Verify
ls -la /var/lib

# Expected output:
root@linux03:/home/kim# ls /var/lib -la
total 180
drwxr-xr-x 45 root      root      4096 Aug 28 00:38 .
drwxr-xr-x 13 root      root      4096 Feb  1  2021 ..
drwxr-xr-x  4 root      root      4096 Feb  1  2021 AccountsService
drwxr-xr-x  5 root      root      4096 Aug 28 00:24 apt
drwxr-xr-x  2 root      root      4096 Sep 10  2020 boltd
drwxr-xr-x  2 root      root      4096 Aug 27 21:21 calico
drwxr-xr-x  8 root      root      4096 Aug 28 00:34 cloud
drwxr-xr-x  4 root      root      4096 Aug 27 23:52 cni
drwxr-xr-x  2 root      root      4096 Aug 27 19:38 command-not-found
drwx--x--x 11 root      root      4096 Aug 27 20:24 containerd
drwxr-xr-x  2 root      root      4096 Aug 27 19:57 dbus
drwxr-xr-x  2 root      root      4096 Apr 10  2020 dhcp
lrwxrwxrwx  1 root      root        25 Aug 27 23:24 docker -> /nfs-share/linux03/docker
drwxr-xr-x  3 root      root      4096 Aug 27 21:15 dockershim
drwxr-xr-x  7 root      root      4096 Aug 28 00:24 dpkg
drwxr-xr-x  3 root      root      4096 Feb  1  2021 fwupd
drwxr-xr-x  2 root      root      4096 Apr 20  2020 git
drwxr-xr-x  4 root      root      4096 Aug 27 19:39 grub
drwxr-xr-x  2 root      root      4096 Aug 27 19:51 initramfs-tools
lrwxrwxrwx  1 root      root        26 Aug 28 00:38 kubelet -> /nfs-share/linux03/kubelet
### truncated for brevity ###
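The link-and-verify pattern above can be exercised self-contained, which is handy for sanity-checking before touching /var/lib. A sketch under a temporary directory (all paths here are throwaway stand-ins for the real /nfs-share and /var/lib locations):

```shell
# Throwaway stand-ins for the real mount and /var/lib
demoRoot=$(mktemp -d)
mkdir -p "$demoRoot/nfs-share/linux03/docker"

# ln -sfn: force-replace any existing link without dereferencing a directory link
ln -sfn "$demoRoot/nfs-share/linux03/docker" "$demoRoot/docker"

# readlink prints the link's target, confirming the redirect
target=$(readlink "$demoRoot/docker")
echo "$target"
```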

# Reinstall docker & kubernetes
version=1.20.10-00
apt-get install -qy --allow-downgrades --allow-change-held-packages kubeadm=$version kubelet=$version kubectl=$version docker-ce docker-ce-cli containerd.io nfs-common
apt-mark hold kubeadm kubelet kubectl

I may make another illustration for NFS mounts, though it may not be necessary as the instructions would be mostly similar. The main difference is that the worker node must automatically mount the NFS share upon reboot; the commands to create the symbolic links would be the same.

Ubuntu: Auto Updates Configuration

Prepare the Linux OS:

# Install auto-update packages
sudo apt install -y unattended-upgrades apt-listchanges

# Configure unattended updates
sudo dpkg-reconfigure -plow unattended-upgrades
Answer ‘Yes’ to allow unattended-upgrades.
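For reference, answering ‘Yes’ writes /etc/apt/apt.conf.d/20auto-upgrades with the periodic settings that actually enable the automatic runs:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```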

Edit the auto-upgrade file:
sudo vim /etc/apt/apt.conf.d/50unattended-upgrades

// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
	"${distro_id}:${distro_codename}";
	"${distro_id}:${distro_codename}-security";
	// Extended Security Maintenance; doesn't necessarily exist for
	// every release and this system may not have it installed, but if
	// available, the policy for updates is such that unattended-upgrades
	// should also install from here by default.
	"${distro_id}ESMApps:${distro_codename}-apps-security";
	"${distro_id}ESM:${distro_codename}-infra-security";
//	"${distro_id}:${distro_codename}-updates";
//	"${distro_id}:${distro_codename}-proposed";
//	"${distro_id}:${distro_codename}-backports";
};

// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
    // The following matches all packages starting with linux-
//  "linux-";

    // Use $ to explicitely define the end of a package name. Without
    // the $, "libc6" would match all of them.
//  "libc6$";
//  "libc6-dev$";
//  "libc6-i686$";

    // Special characters need escaping
//  "libstdc\+\+6$";

    // The following matches packages like xen-system-amd64, xen-utils-4.1,
    // xenstore-utils and libxenstore3.0
//  "(lib)?xen(store)?";

    // For more information about Python regular expressions, see
    // https://docs.python.org/3/howto/regex.html
};

// This option controls whether the development release of Ubuntu will be
// upgraded automatically. Valid values are "true", "false", and "auto".
Unattended-Upgrade::DevRelease "auto";

// This option allows you to control if on a unclean dpkg exit
// unattended-upgrades will automatically run 
//   dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
//Unattended-Upgrade::AutoFixInterruptedDpkg "true";

// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGTERM. This makes the upgrade
// a bit slower but it has the benefit that shutdown while a upgrade
// is running is possible (with a small delay)
//Unattended-Upgrade::MinimalSteps "true";

// Install all updates when the machine is shutting down
// instead of doing it in the background while the machine is running.
// This will (obviously) make shutdown slower.
// Unattended-upgrades increases logind's InhibitDelayMaxSec to 30s.
// This allows more time for unattended-upgrades to shut down gracefully
// or even install a few packages in InstallOnShutdown mode, but is still a
// big step back from the 30 minutes allowed for InstallOnShutdown previously.
// Users enabling InstallOnShutdown mode are advised to increase
// InhibitDelayMaxSec even further, possibly to 30 minutes.
//Unattended-Upgrade::InstallOnShutdown "false";

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
//Unattended-Upgrade::Mail "";

// Set this value to one of:
//    "always", "only-on-error" or "on-change"
// If this is not set, then any legacy MailOnlyOnError (boolean) value
// is used to chose between "only-on-error" and "on-change"
//Unattended-Upgrade::MailReport "on-change";

// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
//Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";

// Do automatic removal of newly unused dependencies after the upgrade
//Unattended-Upgrade::Remove-New-Unused-Dependencies "true";

// Do automatic removal of unused packages after the upgrade
// (equivalent to apt-get autoremove)
//Unattended-Upgrade::Remove-Unused-Dependencies "false";

// Automatically reboot *WITHOUT CONFIRMATION* if
//  the file /var/run/reboot-required is found after the upgrade
//Unattended-Upgrade::Automatic-Reboot "false";

// Automatically reboot even if there are users currently logged in
// when Unattended-Upgrade::Automatic-Reboot is set to true
//Unattended-Upgrade::Automatic-Reboot-WithUsers "true";

// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
//  Default: "now"
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";

// Use apt bandwidth limit feature, this example limits the download
// speed to 70kb/sec
//Acquire::http::Dl-Limit "70";

// Enable logging to syslog. Default is False
// Unattended-Upgrade::SyslogEnable "false";

// Specify syslog facility. Default is daemon
// Unattended-Upgrade::SyslogFacility "daemon";

// Download and install upgrades only on AC power
// (i.e. skip or gracefully stop updates on battery)
// Unattended-Upgrade::OnlyOnACPower "true";

// Download and install upgrades only on non-metered connection
// (i.e. skip or gracefully stop updates on a metered connection)
// Unattended-Upgrade::Skip-Updates-On-Metered-Connections "true";

// Verbose logging
// Unattended-Upgrade::Verbose "false";

// Print debugging information both in unattended-upgrades and
// in unattended-upgrade-shutdown
// Unattended-Upgrade::Debug "false";

// Allow package downgrade if Pin-Priority exceeds 1000
// Unattended-Upgrade::Allow-downgrade "false";

Verify:

sudo unattended-upgrade --dry-run --debug
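It can also help to confirm that the systemd timers which trigger the daily apt runs are scheduled; the timer names below are the Ubuntu defaults:

```shell
# List the timers that kick off the daily apt download and upgrade runs
systemctl list-timers apt-daily.timer apt-daily-upgrade.timer
```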

Linux: Testing Disk Speed

Below is an exercise comparing two different media: an external USB drive versus an SD card

# Perform the WRITE test on USB Disk (keep the test file; the READ test below reuses it)
outputFile=/media/data/test.img
blocksize=1G
numberOfBlocks=1
dd if=/dev/zero of=$outputFile bs=$blocksize count=$numberOfBlocks oflag=dsync

# Perform the WRITE test on SD Card
outputFile=/media/kim/2849-2EAD/test.img
blocksize=1G
numberOfBlocks=1
dd if=/dev/zero of=$outputFile bs=$blocksize count=$numberOfBlocks oflag=dsync

# Before each READ test, drop the page cache so that dd measures the disk, not file contents cached in RAM

# Perform the READ test on USB Disk
outputFile=/media/data/test.img
blocksize=1G
numberOfBlocks=1
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=$outputFile of=/dev/null bs=$blocksize count=$numberOfBlocks
rm $outputFile # clean up the test file

# Perform the READ test on SD Card
outputFile=/media/kim/2849-2EAD/test.img
blocksize=1G
numberOfBlocks=1
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=$outputFile of=/dev/null bs=$blocksize count=$numberOfBlocks
rm $outputFile # clean up the test file

Results:

# USB External Drive Write Speed
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 30.8095 s, 34.9 MB/s

# USB External Drive Read Speed
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 26.5893 s, 40.4 MB/s

# SD Card Write Speed
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 167.147 s, 6.4 MB/s

# SD Card Read Speed
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 68.2176 s, 15.7 MB/s
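As a cross-check, hdparm offers a quick read benchmark that requires no test file. The device path below is an example; substitute the disk under test:

```shell
# -t: timed buffered device reads (disk speed); -T: timed cached reads (RAM/cache speed)
device=/dev/sda # example device; adjust to the disk being tested
sudo hdparm -tT $device
```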

Linux: RSYNC Examples

Following are a few practical uses of this command:

# Copying from a local directory to another
# Important: a trailing slash '/' on the source means "copy the contents of this directory";
# without it, rsync copies the directory itself into the destination
source=/media/kim/2849-2EAD/doanclub_kimconnect/
destination=/media/data/doanclub_kimconnect/
rsync -a $source $destination

# Copying into a directory that does NOT yet exist
# Note: the extra switches display per-file progress and overall transfer statistics
localsource=/var/lib/docker/
localdestination=/nfs-share/linux03/docker/
mkdir -p $localdestination && rsync -avhP --stats --progress $localsource $localdestination

# Copying with exclusions
source=/media/kim/2849-2EAD/
excludeDirectory=.Trash-1000
destination=/media/data/
rsync -a --exclude=$excludeDirectory $source $destination

# Copying from a Local Directory to a remote machine
rsync -a /media/data/ remote_user@remote_host_or_ip:/media/data/

# Copying from Local to Remote using uncommon port
rsync -a -e "ssh -p 22222" /media/data/ remote_user@remote_host_or_ip:/media/data/

# Copying from Remote to Local
rsync -a remote_user@remote_host_or_ip:/media/data/ /media/data/

# Copying from Remote to Local inside a 'screen' session (terminal multiplexer),
# so the transfer survives an SSH disconnect; reattach later with 'screen -r rsyncSession'
screen -S rsyncSession
rsync -a -P remote_user@remote_host_or_ip:/media/data/ /media/data/
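A dry run is a useful safety check before any large transfer: the -n switch prints what would be copied (or deleted, when combined with --delete) without changing anything. Paths here are examples:

```shell
# Preview the transfer; drop -n to actually perform it
source=/media/data/
destination=/backup/data/ # example destination
rsync -avn --delete $source $destination
```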

Linux: Commands to Add a New Disk

# Step 1: create partitioning table (gpt or msdos/mbr)
device=/dev/sdc
sudo parted $device mklabel gpt

# Step 2: Create primary partition and reserve the whole disk toward it
device=/dev/sdc
sudo parted -a opt $device mkpart primary ext4 0% 100%

# Step 3: Create an ext4 file system
partition=/dev/sdc1
label=data
sudo mkfs.ext4 -L $label $partition

# Optional: Change the partition label
# partition=/dev/sdc1
# label=data
# sudo e2label $partition $label

# Step 4: Mount the new partition
mount=/media/data
partition=/dev/sdc1
sudo mkdir -p $mount # create new mount point
sudo mount $partition $mount # mount partition at the new mount point

# Add this line to /etc/fstab so the mount persists across reboots
LABEL=data /media/data ext4 defaults 0 2

# Reload mounts
sudo mount -a
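To confirm the new partition's filesystem, label, and mounted capacity (device and mount paths match the steps above):

```shell
# Show filesystem type and label, then the mounted capacity
lsblk -f /dev/sdc
df -h /media/data
```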

Problem: NextCloud Would Not Start Due to Versioning Variance

This issue has occurred when NextCloud was upgraded in place after deployment: the source Docker image may specify an older version than the one recorded by the running instance's data. This discrepancy will cause the pod to fail to re-create or start as a new container, as shown below:

# Pod scheduling status yields 'Error'
kimconnect@k8sController:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
clamav-0                                          1/1     Running   0          6d23h
collabora-collabora-code-69d74c979f-jp4p2         1/1     Running   0          6d19h
nextcloud-6cf9c65d85-42dx7                        1/2     Error     1          6s
nextcloud-db-postgresql-0                         1/1     Running   0          7d1h

# Further examination of the problem...
kimconnect@k8sController:~$ k describe pod nextcloud-6cf9c65d85-l9b99
Name:         nextcloud-6cf9c65d85-l9b99
Namespace:    default
Priority:     0
Node:         workder05/10.10.100.95
Start Time:   Fri, 20 Aug 2021 23:48:23 +0000
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=nextcloud
              app.kubernetes.io/name=nextcloud
              pod-template-hash=6cf9c65d85
Annotations:  cni.projectcalico.org/podIP: 172.16.90.126/32
              cni.projectcalico.org/podIPs: 172.16.90.126/32
Status:       Running
IP:           172.16.90.126
IPs:
  IP:           172.16.90.126
Controlled By:  ReplicaSet/nextcloud-6cf9c65d85
Containers:
  nextcloud:
    Container ID:   docker://4c202d2155dea39739db815feae271fb8f14438f44092049f3d55c70fbf819c0
    Image:          nextcloud:stable-fpm
    Image ID:       docker-pullable://nextcloud@sha256:641b1dc10b681e1245c6f5d6d366fa1cd7e018ff787cf690c1aa372ddc108671
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 20 Aug 2021 23:54:03 +0000
      Finished:     Fri, 20 Aug 2021 23:54:03 +0000
    Ready:          False
    Restart Count:  6
    Environment:
      POSTGRES_HOST:              nextcloud-db-postgresql.default.svc.cluster.local
      POSTGRES_DB:                nextcloud
      POSTGRES_USER:              <set to the key 'db-username' in secret 'nextcloud-db'>      Optional: false
      POSTGRES_PASSWORD:          <set to the key 'db-password' in secret 'nextcloud-db'>      Optional: false
      NEXTCLOUD_ADMIN_USER:       <set to the key 'nextcloud-username' in secret 'nextcloud'>  Optional: false
      NEXTCLOUD_ADMIN_PASSWORD:   <set to the key 'nextcloud-password' in secret 'nextcloud'>  Optional: false
      NEXTCLOUD_TRUSTED_DOMAINS:  kimconnect.com
      NEXTCLOUD_DATA_DIR:         /var/www/html/data
    Mounts:
      /usr/local/etc/php-fpm.d/memory_limit from nextcloud-phpconfig (rw,path="memory_limit")
      /usr/local/etc/php-fpm.d/post_max_size from nextcloud-phpconfig (rw,path="post_max_size")
      /usr/local/etc/php-fpm.d/upload_max_filesize from nextcloud-phpconfig (rw,path="upload_max_filesize")
      /usr/local/etc/php-fpm.d/upload_max_size from nextcloud-phpconfig (rw,path="upload_max_size")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdhxv (ro)
      /var/www/ from nextcloud-data (rw,path="root")
      /var/www/html from nextcloud-data (rw,path="html")
      /var/www/html/config from nextcloud-data (rw,path="config")
      /var/www/html/custom_apps from nextcloud-data (rw,path="custom_apps")
      /var/www/html/data from nextcloud-data (rw,path="data")
      /var/www/html/themes from nextcloud-data (rw,path="themes")
      /var/www/tmp from nextcloud-data (rw,path="tmp")
  nextcloud-nginx:
    Container ID:   docker://1fae573d1a0591058ad55f939b4762f01c7a5f6e7275d2348ff1bd287e077fe5
    Image:          nginx:alpine
    Image ID:       docker-pullable://nginx@sha256:e20c21e530f914fb6a95a755924b1cbf71f039372e94ac5ddcf8c3b386a44615
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 20 Aug 2021 23:48:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/nginx.conf from nextcloud-nginx-config (rw,path="nginx.conf")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdhxv (ro)
      /var/www/ from nextcloud-data (rw,path="root")
      /var/www/html from nextcloud-data (rw,path="html")
      /var/www/html/config from nextcloud-data (rw,path="config")
      /var/www/html/custom_apps from nextcloud-data (rw,path="custom_apps")
      /var/www/html/data from nextcloud-data (rw,path="data")
      /var/www/html/themes from nextcloud-data (rw,path="themes")
      /var/www/tmp from nextcloud-data (rw,path="tmp")
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nextcloud-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nextcloud-claim
    ReadOnly:   false
  nextcloud-phpconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nextcloud-phpconfig
    Optional:  false
  nextcloud-nginx-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nextcloud-nginxconfig
    Optional:  false
  default-token-bdhxv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bdhxv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nextcloud-6cf9c65d85-l9b99 to linux05
  Normal   Pulled     10m                  kubelet            Container image "nginx:alpine" already present on machine
  Normal   Created    10m                  kubelet            Created container nextcloud-nginx
  Normal   Started    10m                  kubelet            Started container nextcloud-nginx
  Normal   Created    9m47s (x4 over 10m)  kubelet            Created container nextcloud
  Normal   Started    9m46s (x4 over 10m)  kubelet            Started container nextcloud
  Normal   Pulled     8m55s (x5 over 10m)  kubelet            Container image "nextcloud:stable-fpm" already present on machine
  Warning  BackOff    18s (x51 over 10m)   kubelet            Back-off restarting failed container

# Checking the logs
kimconnect@k8sController:~$ k logs nextcloud-6cf9c65d85-l9b99 nextcloud
Can't start Nextcloud because the version of the data (21.0.4.1) is higher than the docker image version (20.0.8.1) and downgrading is not supported. Are you sure you have pulled the newest image version?

Solution:

# a. Create a backup copy of version.php
  nfsServer=x.x.x.x # address of the NFS server backing the nextcloud-data volume
  sudo mount $nfsServer:/volume1/nextcloud /mnt/nextcloud
  cd /mnt/nextcloud/html
  cp version.php version.php.bak

# b. Edit the version.php file with this content
  vim version.php
########
# <?php
# $OC_Version = array(21,0,4,1); # change this value to array(20,0,8,1)
# $OC_VersionString = '21.0.4'; # change this value to '20.0.8'
# $OC_Edition = '';
# $OC_Channel = 'stable';
# $OC_VersionCanBeUpgradedFrom = array (
#   'nextcloud' =>
#   array (
#     '20.0' => true,
#     '21.0' => true,
#   ),
#   'owncloud' =>
#   array (
#     '10.5' => true,
#   ),
# );
# $OC_Build = '2021-08-03T15:44:43+00:00 c52fea0b16690b492f6c4175e1ae71d488936244';
# $vendor = 'nextcloud';
########

# c. Recreate the failed pod and verify that it reaches 'Running' status

kimconnect@k8sController:~$ k delete pod nextcloud-6cf9c65d85-l9b99
pod "nextcloud-6cf9c65d85-l9b99" deleted
kimconnect@k8sController:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
clamav-0                                          1/1     Running   0          6d23h
collabora-collabora-code-69d74c979f-jp4p2         1/1     Running   0          6d19h
nextcloud-6cf9c65d85-dmg2s                        2/2     Running   0          17s
nextcloud-db-postgresql-0                         1/1     Running   0          7d1h

# d. Revert changes to version.php

cd /mnt/nextcloud/html
mv version.php version.php.old
mv version.php.bak version.php

NextCloud Container PHP Memory Issue as Deployed via Kubernetes

Most common example:

Below is a raw text paste of an exercise in resolving PHP running out of memory. This commonly happens with default Docker/Kubernetes containerized deployments.

# Entering container bash shell command line interface
admin@controller01:~$ containerName=nextcloud-6cf9c65d85-45kll
admin@controller01:~$ kubectl exec --stdin --tty $containerName -- /bin/bash
Defaulting container name to nextcloud.
Use 'kubectl describe pod/nextcloud-6cf9c65d85-45kll -n default' to see all of the containers in this pod.

# Attempting to run the occ command, which is not in the shell's PATH
root@nextcloud-6cf9c65d85-45kll:/var/www/html# occ db:add-missing-indices
bash: occ: command not found

# Calling command from its known path
root@nextcloud-6cf9c65d85-45kll:/var/www/html# /var/www/html/occ db:add-missing-indices
Console has to be executed with the user that owns the file config/config.php
Current user id: 0
Owner id of config.php: 33
Try adding 'sudo -u #33' to the beginning of the command (without the single quotes)
If running with 'docker exec' try adding the option '-u 33' to the docker command (without the single quotes)

# Change into user id 33 (www-data)
root@nextcloud-6cf9c65d85-45kll:/var/www/html# chsh -s /bin/bash www-data
root@nextcloud-6cf9c65d85-45kll:/var/www/html# su - www-data

# Retry running the php command to experience the out-of-memory error
www-data@nextcloud-6cf9c65d85-45kll:~$ /var/www/html/occ db:add-missing-indices

Fatal error: Allowed memory size of 2097152 bytes exhausted (tried to allocate 438272 bytes) in /var/www/html/3rdparty/composer/autoload_real.php on line 37

# Workaround the memory error by invoking php with no memory limits
www-data@nextcloud-6cf9c65d85-45kll:~$ php -d memory_limit=-1 -f  /var/www/html/occ db:add-missing-indices
Check indices of the share table.
Check indices of the filecache table.
Adding additional size index to the filecache table, this can take some time...
Filecache table updated successfully.
Check indices of the twofactor_providers table.
Check indices of the login_flow_v2 table.
Check indices of the whats_new table.
Check indices of the cards table.
Check indices of the cards_properties table.
Check indices of the calendarobjects_props table.
Check indices of the schedulingobjects table.
Check indices of the oc_properties table.

Another example in calling cron.php as privileged user:

# Executing the command within the Docker container
# Note: quote the assignment, and use double quotes around "$command" so the variable expands
command='php -d memory_limit=-1 -f /var/www/html/cron.php'
docker exec nextcloud su - www-data -s /bin/bash -c "$command"

Generalized workarounds:

# Option 1:
docker exec nextcloud su - www-data -s /bin/bash -c 'PHP_MEMORY_LIMIT=-1 php -f /var/www/html/cron.php'

# Option 2:
docker exec nextcloud su --whitelist-environment=PHP_MEMORY_LIMIT - www-data -s /bin/bash -c 'PHP_MEMORY_LIMIT=-1 php -f /var/www/html/cron.php'

# Option 3:
Add PHP_MEMORY_LIMIT=2G to /etc/environment
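For Kubernetes deployments, a more durable fix than per-invocation overrides is to set the limit as an environment variable on the deployment, assuming the image honors PHP_MEMORY_LIMIT (the official nextcloud image does). The deployment name below matches this article's example:

```shell
# Set the PHP memory limit on the nextcloud deployment; pods restart with the new value
kubectl set env deployment/nextcloud PHP_MEMORY_LIMIT=2G
```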

How to Install NFS Server on Ubuntu 21.04

Installing NFS Server

# Include prerequisites
sudo apt update -y # refresh package lists prior to installing
sudo apt install nfs-kernel-server # install the NFS server
sudo systemctl enable nfs-server # start nfs-server on boot
sudo systemctl status nfs-server # check its status

# check server status
root@worker03:/home/brucelee# sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
     Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
    Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
   Main PID: 2732 (code=exited, status=0/SUCCESS)

Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.

# Prepare an empty folder
sudo su # enter root
nfsShare=/nfs-share
mkdir -p $nfsShare # create folder if it doesn't exist
chown nobody:nogroup $nfsShare
chmod -R 777 $nfsShare # world-writable; not recommended for production

# Edit the nfs server share configs
vim /etc/exports
# add these lines
/nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)

# Export directory and make it available
sudo exportfs -rav

# Verify nfs shares
sudo exportfs -v

# Enable ingress for subnet
sudo ufw allow from x.x.x.x/24 to any port nfs

# Check firewall status - inactive firewall is fine for testing
root@worker03:/home/brucelee# sudo ufw status
Status: inactive
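With the export in place, showmount (run here on the server itself) can confirm what clients will see:

```shell
# List the currently exported shares as seen by clients
showmount -e localhost
```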

How to Install NFS Client on Ubuntu 21.04

# Install prerequisites
sudo apt update -y
sudo apt install nfs-common

# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount

# Unmount
sudo umount $localMount
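To make the NFS mount persist across reboots, an fstab entry can be appended. The server address and paths follow the example above; the _netdev option delays mounting until the network is up:

```shell
# Append the fstab entry, then verify it mounts cleanly
echo 'server.ip.here:/nfs-share /mnt/testmount nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount -a
```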

How to Check System Temperature on Ubuntu 21.04

# Install sensors
sudo apt update -y
sudo apt install lm-sensors hddtemp -y

# Setup detection
sudo sensors-detect

# Spot check sensors
sensors

# Check hard drive temperatures
sudo hddtemp /dev/sd[abcdeg]

Sample output:

adminguy@worker02:~$ sudo sensors-detect
# sensors-detect version 3.6.0
# System: Micro-Star International Co., Ltd MS-7B07 [1.0]
# Board: Micro-Star International Co., Ltd A320M PRO-VH PLUS(MS-7B07)
# Kernel: 5.11.0-25-generic x86_64
# Processor: AMD Ryzen 5 2400G with Radeon Vega Graphics (23/17/0)

This program will help you determine which kernel modules you need
to load to use lm_sensors most effectively. It is generally safe
and recommended to accept the default answers to all questions,
unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors.
Do you want to scan for them? This is totally safe. (YES/no): YES
Module cpuid loaded successfully.
Silicon Integrated Systems SIS5595...                       No
VIA VT82C686 Integrated Sensors...                          No
VIA VT8231 Integrated Sensors...                            No
AMD K8 thermal sensors...                                   No
AMD Family 10h thermal sensors...                           No
AMD Family 11h thermal sensors...                           No
AMD Family 12h and 14h thermal sensors...                   No
AMD Family 15h thermal sensors...                           No
AMD Family 16h thermal sensors...                           No
AMD Family 17h thermal sensors...                           Success!
    (driver `k10temp')
AMD Family 15h power sensors...                             No
AMD Family 16h power sensors...                             No
Hygon Family 18h thermal sensors...                         No
Intel digital thermal sensor...                             No
Intel AMB FB-DIMM thermal sensor...                         No
Intel 5500/5520/X58 thermal sensor...                       No
VIA C7 thermal sensor...                                    No
VIA Nano thermal sensor...                                  No

Some Super I/O chips contain embedded sensors. We have to write to
standard I/O ports to probe them. This is usually safe.
Do you want to scan for Super I/O sensors? (YES/no):
Probing for Super-I/O at 0x2e/0x2f
Trying family `National Semiconductor/ITE'...               No
Trying family `SMSC'...                                     No
Trying family `VIA/Winbond/Nuvoton/Fintek'...               No
Trying family `ITE'...                                      No
Probing for Super-I/O at 0x4e/0x4f
Trying family `National Semiconductor/ITE'...               No
Trying family `SMSC'...                                     No
Trying family `VIA/Winbond/Nuvoton/Fintek'...               Yes
Found `Nuvoton NCT6795D Super IO Sensors'                   Success!
    (address 0xa20, driver `nct6775')

Some systems (mainly servers) implement IPMI, a set of common interfaces
through which system health data may be retrieved, amongst other things.
We first try to get the information from SMBIOS. If we don't find it
there, we have to read from arbitrary I/O ports to probe for such
interfaces. This is normally safe. Do you want to scan for IPMI
interfaces? (YES/no):
Probing for `IPMI BMC KCS' at 0xca0...                      No
Probing for `IPMI BMC SMIC' at 0xca8...                     No

Some hardware monitoring chips are accessible through the ISA I/O ports.
We have to write to arbitrary I/O ports to probe them. This is usually
safe though. Yes, you do have ISA I/O ports even if you do not have any
ISA slots! Do you want to scan the ISA I/O ports? (yes/NO):

Lastly, we can probe the I2C/SMBus adapters for connected hardware
monitoring devices. This is the most risky part, and while it works
reasonably well on most systems, it has been reported to cause trouble
on some systems.
Do you want to probe the I2C/SMBus adapters now? (YES/no):
Using driver `i2c-piix4' for device 0000:00:14.0: AMD KERNCZ SMBus

Next adapter: SMBus PIIX4 adapter port 0 at 0b00 (i2c-0)
Do you want to scan it? (yes/NO/selectively):

Next adapter: SMBus PIIX4 adapter port 2 at 0b00 (i2c-1)
Do you want to scan it? (yes/NO/selectively):

Next adapter: SMBus PIIX4 adapter port 1 at 0b20 (i2c-2)
Do you want to scan it? (yes/NO/selectively):

Next adapter: AMDGPU DM i2c hw bus 0 (i2c-3)
Do you want to scan it? (yes/NO/selectively):

Next adapter: AMDGPU DM i2c hw bus 1 (i2c-4)
Do you want to scan it? (yes/NO/selectively):

Next adapter: AMDGPU DM i2c hw bus 2 (i2c-5)
Do you want to scan it? (yes/NO/selectively):

Next adapter: AMDGPU DM aux hw bus 1 (i2c-6)
Do you want to scan it? (yes/NO/selectively):


Now follows a summary of the probes I have just done.
Just press ENTER to continue:

Driver `k10temp' (autoloaded):
  * Chip `AMD Family 17h thermal sensors' (confidence: 9)

Driver `nct6775':
  * ISA bus, address 0xa20
    Chip `Nuvoton NCT6795D Super IO Sensors' (confidence: 9)

To load everything that is needed, add this to /etc/modules:
#----cut here----
# Chip drivers
nct6775
#----cut here----
If you have some drivers built into your kernel, the list above will
contain too many modules. Skip the appropriate ones!

Do you want to add these lines automatically to /etc/modules? (yes/NO)

Unloading cpuid... OK

adminguy@worker02:~$ sensors
amdgpu-pci-2900
Adapter: PCI adapter
vddgfx:           N/A
vddnb:            N/A
edge:         +36.0°C

k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +36.1°C
Tdie:         +36.1°C

Run Memory Tester On Ubuntu 20.04

GNU GRUB 2.04 includes a memtest86+ boot entry that can be used to perform RAM load tests offline. Below are some commands to run a memory test from within the running Linux OS instead:

# Install the memory tester
sudo apt install memtester

# Perform the RAM test (the sample result below used a 200MB test size)
testSize=200M
iterations=1
sudo memtester $testSize $iterations

# Sample result
ramtest@linuxbox:~$ sudo memtester $testSize $iterations
memtester version 4.5.0 (64-bit)
Copyright (C) 2001-2020 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffffffffffff000
want 200MB (209715200 bytes)
got  200MB (209715200 bytes), trying mlock ...locked.
Loop 1/1:
  Stuck Address       : ok
  Random Value        : ok
  Compare XOR         : ok
  Compare SUB         : ok
  Compare MUL         : ok
  Compare DIV         : ok
  Compare OR          : ok
  Compare AND         : ok
  Sequential Increment: ok
  Solid Bits          : ok
  Block Sequential    : ok
  Checkerboard        : ok
  Bit Spread          : ok
  Bit Flip            : ok
  Walking Ones        : ok
  Walking Zeroes      : ok
  8-bit Writes        : ok
  16-bit Writes       : ok

Done.

How to Setup Software RAID on Ubuntu 20.04

Step 1: prepare to configure RAID by checking the system

In the example below, we're using a test Linux computer running Ubuntu 20.04. It has 4 Samsung SSDs: two 860 EVO and two 870 QVO models. The intention of this exercise is to set up RAID-10 (a pair of striped mirrors). The output shows that sda & sdc are one model, while sdb & sdg are the other. To avoid 'lowest common denominator' performance issues, we're ensuring that each mirrored pair consists of drives with similar performance.

# Verify whether system has the software raid package installed
kimconnect@devlinux02:~$ apt list -a mdadm
Listing... Done
mdadm/hirsute,now 4.1-10ubuntu3 amd64 [installed,automatic]

# Check the partitions
root@devlinux02:/home/kimconnect# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0  68.8M  1 loop /snap/lxd/20037
loop1                       7:1    0  55.4M  1 loop /snap/core18/1997
loop2                       7:2    0  32.3M  1 loop /snap/snapd/11588
sda                         8:0    0 931.5G  0 disk
└─sda1                      8:1    0 931.5G  0 part
sdb                         8:16   0 931.5G  0 disk
└─sdb1                      8:17   0 931.5G  0 part
sdc                         8:32   0 931.5G  0 disk
└─sdc1                      8:33   0 931.5G  0 part
sdd                         8:48   1  29.8G  0 disk
├─sdd1                      8:49   1   512M  0 part /boot/efi
├─sdd2                      8:50   1     1G  0 part /boot
└─sdd3                      8:51   1  28.3G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0  28.3G  0 lvm  /
sdg                         8:96   0 931.5G  0 disk
└─sdg1                      8:97   0 931.5G  0 part

# Verify disk specs
ls -lF /dev/disk/by-id/

root@devlinux02:/home/kimconnect# ls -lF /dev/disk/by-id/|grep Samsung
lrwxrwxrwx 1 root root  9 Aug 13 00:38 ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0KB46308J -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 13 00:46 ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0KB46308J-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 ata-Samsung_SSD_860_EVO_1TB_S5B3NDFN912396N -> ../../sdg
lrwxrwxrwx 1 root root 10 Aug 13 00:46 ata-Samsung_SSD_860_EVO_1TB_S5B3NDFN912396N-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 ata-Samsung_SSD_870_QVO_1TB_S5VSNG0NA05357H -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 13 00:46 ata-Samsung_SSD_870_QVO_1TB_S5VSNG0NA05357H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 ata-Samsung_SSD_870_QVO_1TB_S5VSNJ0NC00894D -> ../../sdc
lrwxrwxrwx 1 root root 10 Aug 13 00:46 ata-Samsung_SSD_870_QVO_1TB_S5VSNJ0NC00894D-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-0ATA_Samsung_SSD_860_S3Z8NB0KB46308J -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-0ATA_Samsung_SSD_860_S3Z8NB0KB46308J-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-0ATA_Samsung_SSD_860_S5B3NDFN912396N -> ../../sdg
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-0ATA_Samsung_SSD_860_S5B3NDFN912396N-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-0ATA_Samsung_SSD_870_S5VSNG0NA05357H -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-0ATA_Samsung_SSD_870_S5VSNG0NA05357H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-0ATA_Samsung_SSD_870_S5VSNJ0NC00894D -> ../../sdc
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-0ATA_Samsung_SSD_870_S5VSNJ0NC00894D-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-1ATA_Samsung_SSD_860_EVO_1TB_S3Z8NB0KB46308J -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-1ATA_Samsung_SSD_860_EVO_1TB_S3Z8NB0KB46308J-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-1ATA_Samsung_SSD_860_EVO_1TB_S5B3NDFN912396N -> ../../sdg
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-1ATA_Samsung_SSD_860_EVO_1TB_S5B3NDFN912396N-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-1ATA_Samsung_SSD_870_QVO_1TB_S5VSNG0NA05357H -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-1ATA_Samsung_SSD_870_QVO_1TB_S5VSNG0NA05357H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-1ATA_Samsung_SSD_870_QVO_1TB_S5VSNJ0NC00894D -> ../../sdc
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-1ATA_Samsung_SSD_870_QVO_1TB_S5VSNJ0NC00894D-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-SATA_Samsung_SSD_860_S3Z8NB0KB46308J -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-SATA_Samsung_SSD_860_S3Z8NB0KB46308J-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-SATA_Samsung_SSD_860_S5B3NDFN912396N -> ../../sdg
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-SATA_Samsung_SSD_860_S5B3NDFN912396N-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 Aug 13 00:10 scsi-SATA_Samsung_SSD_870_S5VSNG0NA05357H -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-SATA_Samsung_SSD_870_S5VSNG0NA05357H-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Aug 13 00:38 scsi-SATA_Samsung_SSD_870_S5VSNJ0NC00894D -> ../../sdc
lrwxrwxrwx 1 root root 10 Aug 13 00:46 scsi-SATA_Samsung_SSD_870_S5VSNJ0NC00894D-part1 -> ../../sdc1

Step 2: Setup RAID

# Initialize disks as Raid capable
diskArray=(a b c g)
partitionType=msdos
fileSystemType=ext4

for item in "${diskArray[@]}"
do
  device=/dev/sd$item
  if grep -q $device /proc/mounts;
  then
    echo "Disk $device is currently mounted. Skipping it..."
  else
    echo "configuring $device"
    parted -a optimal -s $device mklabel $partitionType
    parted -a optimal -s $device mkpart primary $fileSystemType 0% 100%
    parted -a optimal -s $device set 1 raid on
    parted -a optimal -s $device print # display partition table
  fi
done

# Create RAID
disksCount=4
diskList=[abcg] # shell glob: /dev/sd$diskList\1 expands to /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdg1
raidMount=/dev/md0
raidLevel=10 # options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath

# Warning: this command will destroy data on disks that have previously been members of other RAID arrays
# yes | mdadm --create $raidMount --level=$raidLevel --raid-devices=$disksCount /dev/sd${diskList}1
mdadm --create $raidMount --level=$raidLevel --raid-devices=$disksCount /dev/sd${diskList}1
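Because the unquoted glob decides which disks mdadm will wipe, it pays to preview the expansion first. The sketch below reproduces the expansion against scratch files rather than real devices:

```shell
#!/bin/sh
# Sketch: preview how sd[abcg]1 expands, using scratch files instead of real devices
demo=$(mktemp -d)
touch "$demo/sda1" "$demo/sdb1" "$demo/sdc1" "$demo/sdd1" "$demo/sdg1"
diskList=[abcg]
# the unquoted variable lets the shell expand the pattern, exactly as mdadm would receive it
echo $demo/sd${diskList}1
```

Only sda1, sdb1, sdc1, and sdg1 are listed; sdd1 falls outside the bracket expression.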

Optional: Troubleshooting

# How to remove disks from a RAID array (a member must be marked failed before removal)
raidMount=/dev/md0
diskArray=(a b c g)
for item in "${diskArray[@]}"
do
  partition=/dev/sd${item}1 # the array members are partitions, not whole disks
  mdadm $raidMount --fail $partition
  mdadm $raidMount --remove $partition
done

# How to stop the RAID array (as a prerequisite to rebuilding or re-configuring)
raidMount=/dev/md0
mdadm -S $raidMount

root@devlinux02:/home/kimconnect# mdadm -S /dev/md0
mdadm: stopped /dev/md0

# Verify the new RAID array
root@devlinux02:/home/kimconnect# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
loop0                       7:0    0  55.4M  1 loop   /snap/core18/1997
loop1                       7:1    0  61.8M  1 loop   /snap/core20/1081
loop2                       7:2    0  55.4M  1 loop   /snap/core18/2128
loop3                       7:3    0  68.3M  1 loop   /snap/lxd/21260
loop4                       7:4    0  68.8M  1 loop   /snap/lxd/20037
loop5                       7:5    0  32.3M  1 loop   /snap/snapd/12704
loop6                       7:6    0  32.3M  1 loop   /snap/snapd/11588
sda                         8:0    0 931.5G  0 disk
└─sda1                      8:1    0 931.5G  0 part
  └─md0                     9:0    0   1.8T  0 raid10
sdb                         8:16   0 931.5G  0 disk
└─sdb1                      8:17   0 931.5G  0 part
  └─md0                     9:0    0   1.8T  0 raid10
sdc                         8:32   0 931.5G  0 disk
└─sdc1                      8:33   0 931.5G  0 part
  └─md0                     9:0    0   1.8T  0 raid10
sdd                         8:48   1  29.8G  0 disk
├─sdd1                      8:49   1   512M  0 part   /boot/efi
├─sdd2                      8:50   1     1G  0 part   /boot
└─sdd3                      8:51   1  28.3G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0  28.3G  0 lvm    /
sdg                         8:96   0 931.5G  0 disk
└─sdg1                      8:97   0 931.5G  0 part
  └─md0                     9:0    0   1.8T  0 raid10

# Show RAID status
root@devlinux02:/home/kimconnect# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Aug 13 00:46:13 2021
        Raid Level : raid10
        Array Size : 1953257472 (1862.77 GiB 2000.14 GB)
     Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Aug 13 01:05:18 2021
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

     Resync Status : 11% complete

              Name : devlinux02:0  (local to host devlinux02)
              UUID : 3287cfe9:7a213a3f:381214bb:05564dd4
            Events : 200

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       97        3      active sync set-B   /dev/sdg1

Step 3: Partition the new RAID Array

# Create a partition (volume) on the RAID array
mkfs.ext4 /dev/md0

root@devlinux02:/home/kimconnect# mkfs.ext4 /dev/md0
mke2fs 1.45.7 (28-Jan-2021)
Discarding device blocks: done
Creating filesystem with 488314368 4k blocks and 122085376 inodes
Filesystem UUID: 29e6572f-a9f1-486a-85a5-874b7bf1ff9d
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount the volume
mount=/nfs-share
mkdir $mount
raidMount=/dev/md0
mount $raidMount $mount

# Check the mount
df -hT -P $mount

root@devlinux02:/home/kimconnect# df -hT -P $mount
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  1.8T   77M  1.7T   1% /mnt/raid10

# Check the running RAID config
root@devlinux02:/home/kimconnect# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=devlinux02:0 UUID=3287cfe9:7a213a3f:381214bb:05564dd4

# Read the contents of persistent RAID array config
cat /etc/mdadm/mdadm.conf

# Append living config into the persistent config file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
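Re-running the append above leaves duplicate ARRAY lines in mdadm.conf. An idempotent variant (a sketch against a temp file; the ARRAY line is the one scanned earlier) only appends when the line is missing:

```shell
#!/bin/sh
# Sketch: append an ARRAY line only when absent, so re-runs do not duplicate it
conf=$(mktemp)   # stand-in for /etc/mdadm/mdadm.conf
arrayLine='ARRAY /dev/md0 metadata=1.2 name=devlinux02:0 UUID=3287cfe9:7a213a3f:381214bb:05564dd4'
for run in 1 2; do
  grep -qF "$arrayLine" "$conf" || echo "$arrayLine" >> "$conf"
done
grep -cF 'ARRAY' "$conf"   # still a single entry after two runs
```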

# Update file system initialization
update-initramfs -u

# Search for the new RAID mount point to retrieve its UUID
root@devlinux02:~$ ls -la /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 120 Aug 13 03:58 .
drwxr-xr-x 7 root root 140 Aug 13 03:58 ..
lrwxrwxrwx 1 root root   9 Aug 13 03:58 29e6572f-a9f1-486a-85a5-874b7bf1ff9d -> ../../md0
lrwxrwxrwx 1 root root  10 Aug 13 03:58 322fa7a9-da0a-4d01-9b97-1d4878852f07 -> ../../dm-0
lrwxrwxrwx 1 root root  10 Aug 13 03:58 73C5-267E -> ../../sdd1
lrwxrwxrwx 1 root root  10 Aug 13 03:58 e2d8ca2f-df9b-4b6c-af98-d66e3723e459 -> ../../sdd2

# Enable mount to persist on reboots
mount=/nfs-share
fileSystemType=ext4
uuid=29e6572f-a9f1-486a-85a5-874b7bf1ff9d
echo "/dev/disk/by-uuid/$uuid $mount $fileSystemType defaults 0 1" >> /etc/fstab

The above illustration assumes the ext4 file system rather than btrfs, xfs, zfs, etc. At the time of this writing, ext4 is the standard for most Ubuntu machines. It yields faster transfer speeds than btrfs, with the downside of lacking checksums, snapshots, and other modern file system features. Still, ext4 is known for stability, while the newer file systems are considered bleeding edge.

Moreover, the last step of configuring a file system on a RAID array is a reboot to verify that the newly mounted volume persists across restarts.
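Before that reboot, a quick sanity check can catch a typo in the appended line: a valid fstab entry has exactly six whitespace-separated fields (device, mount point, type, options, dump, pass). A sketch using the entry from above:

```shell
#!/bin/sh
# Sketch: count the fields of the fstab entry appended above; fstab expects exactly six
line='/dev/disk/by-uuid/29e6572f-a9f1-486a-85a5-874b7bf1ff9d /nfs-share ext4 defaults 0 1'
echo "$line" | awk '{print NF}'
```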

Linux: Bash Shell Script To Move/Archive Old Files

# Create a script to move files (the quoted 'EOF' prevents variable expansion while writing
# the file; sudo tee is used because 'sudo cat << EOF > file' would run the redirection
# as the unprivileged user)
sudo tee /usr/local/sbin/movefiles > /dev/null << 'EOF'
#!/bin/bash
sourceDirectory="$1"
moveTo="$2"
olderThan="$3"
find "$sourceDirectory" -type f -mtime +"$olderThan" -print0 | while IFS= read -r -d '' file; do mv "$file" "$moveTo"; done
EOF

# Mark the file with executable permissions
sudo chmod +x /usr/local/sbin/movefiles

# Create a new entry in cron
sudo crontab -e
 
# Insert these new lines
0 2 * * * /usr/local/sbin/movefiles /home/path/tosource/ /path/todestination/ 30 > /dev/null
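The find/mv pattern inside movefiles can be verified in scratch directories before wiring it into cron. A sketch that back-dates a file with GNU touch -d and checks that only old files move:

```shell
#!/bin/bash
# Sketch: exercise the movefiles logic against scratch directories
src=$(mktemp -d); dst=$(mktemp -d)
touch -d "40 days ago" "$src/old file.log"   # the space in the name exercises the -print0 handling
touch "$src/new.log"
olderThan=30
find "$src" -type f -mtime +"$olderThan" -print0 |
  while IFS= read -r -d '' file; do mv "$file" "$dst"; done
ls "$dst"   # only the back-dated file should have moved
```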

How To Use NXLog On A Windows Client

Step 1: Setup Server
- Installing a log aggregation server is out of scope of this document

Step 2: Setup Client
- Download and install NXLog Client:
  - Latest edition: https://nxlog.co/products/nxlog-community-edition/download
  - Automated install: choco install -y nxlog (version 2.10.2150 as of August 2021)
- Readme File content:
    Please edit the configuration file after installation. This file is
    located in the `conf` directory where NXLog was installed (default
    `C:\Program Files (x86)\nxlog\conf\nxlog.conf` on 64-bit Windows). If
    you chose a custom installation directory, you will need to update the
    ROOT directory specified in the configuration file before the NXLog
    service will start.

    The NXLog service can be started from the Services console (run
    `services.msc`) or will be started automatically at the next
    boot. Alternatively, the service can be started by executing
    `nxlog.exe`, located in the installation directory. The `-f` command
    line argument can be used to run NXLog in the foreground.

    By default, NXLog will write its own messages to the log file named
    `nxlog.log` in the `data` directory (default `C:\Program Files
    (x86)\nxlog\data\nxlog.log` on 64-bit Windows). If you have trouble
    starting or running NXLog, check that file for errors.

    See the NXLog Reference Manual for details about configuration and
    usage. The Reference Manual is installed in the `doc` directory
    (default `C:\Program Files (x86)\nxlog\doc` on 64-bit Windows) and
    should also be available online at <https://nxlog.co/resources>.
- Configure:
  - How to configure: https://nxlog.co/eventlog-to-syslog
  - Edit the file at C:\Program Files (x86)\nxlog\conf\nxlog.conf  

# Uncomment these lines if using Graylog
#<Extension _gelf>
#    Module xm_gelf
#</Extension>

# How to generate query: https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/advanced-xml-filtering-in-the-windows-event-viewer/ba-p/399761
# Please note that indentation is important!
# Example: capture Windows logon events (4624) with logon type 10 (RemoteInteractive/RDP).
# Keep comments outside the Query value: a '#' line inside the backslash-continued block
# breaks parsing.
<Input im_msvistalog>
    Module      im_msvistalog
    Query   <QueryList>\
                <Query Id="0">\
                    <Select Path="Security">*[System[(EventID=4624)]] and *[EventData[(Data=10)]]</Select>\
                </Query>\
            </QueryList>
</Input>

<Output om_udp>
  Module  om_udp
  Host  [IPADDRESSHERE]
  Port  514
  Exec  to_syslog_snare(); # Use this if Graylog: OutputType  GELF
</Output>

<Route out>
  Path  im_msvistalog  => om_udp
</Route>

- Confirm whether remote host port is reachable from client
  PS C:\Windows\system32> test-netconnection syslog-server -port 514
  ComputerName           : syslog-server
  RemoteAddress          : x.x.x.x
  RemotePort             : 514
  InterfaceAlias         : Ethernet 2
  SourceAddress          : x.x.x.x
  PingSucceeded          : True
  PingReplyDetails (RTT) : 1 ms
  TcpTestSucceeded       : True

- Start NXLog client service
  PS C:\Windows\system32> start-service NXLog

- Troubleshooting
  - Sample problem:
    PS C:\Windows\system32> start-service nxlog
    start-service : Service 'nxlog (nxlog)' cannot be started due to the following error: Cannot start service nxlog on
    computer '.'.
    At line:1 char:1
    + start-service nxlog
    + ~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : OpenError: (System.ServiceProcess.ServiceController:ServiceController) [Start-Service],
      ServiceCommandException
        + FullyQualifiedErrorId : CouldNotStartService,Microsoft.PowerShell.Commands.StartServiceCommand

    PS C:\Windows\system32> &"C:\Program Files (x86)\nxlog\nxlog.exe" -c "C:\Program Files (x86)\nxlog\conf\nxlog.conf"
    Expected </Input> but saw </Query> at C:\Program Files (x86)\nxlog\conf\nxlog.conf:53
  - The resolution to the error above is to fix the indentation and line continuations of the Query block in the config file

- Check service connections
  PS C:\Windows\system32> netstat -nbt| select-string -Pattern "nxlog" -Context 2
    [rstudio.exe]
      TCP    127.0.0.1:61682        127.0.0.1:61683        ESTABLISHED
  >  [nxlog.exe]
      TCP    127.0.0.1:61683        127.0.0.1:61682        ESTABLISHED

- Check server for established tcp connections
  [thanos@syslog-server thanos] #
  remoteIP=x.x.x.x
  match="" # initialize as empty so the loop runs at least once ($null is a PowerShell-ism)
  while [ -z "$match" ]
  do
    clear
    echo "Checking for incoming connection from $remoteIP"
    match=$(netstat -na|grep $remoteIP)
    [[ ! -z "$match" ]] && echo "$match" || sleep 1
  done

- Check which daemons and owners listen to which ports
  [thanos@syslog-server thanos] # netstat -elpt
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
  tcp        0      0 0.0.0.0:sunrpc          0.0.0.0:*               LISTEN      root       7956       1/systemd
  tcp        0      0 0.0.0.0:33427           0.0.0.0:*               LISTEN      root       13042      -
  tcp        0      0 localhost:domain        0.0.0.0:*               LISTEN      systemd-resolve 8111       673/systemd-resolve
  tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN      root       12384      1162/sshd: /usr/sbi
  tcp        0      0 localhost.localdom:smtp 0.0.0.0:*               LISTEN      root       9157       787/master
  tcp        0      0 0.0.0.0:46463           0.0.0.0:*               LISTEN      rpcuser    12960      1275/rpc.statd
  tcp        0      0 0.0.0.0:shell           0.0.0.0:*               LISTEN      root       11650      1252/rsyslogd
  tcp        0      0 localhost.localdom:smux 0.0.0.0:*               LISTEN      root       9674       811/snmpd

- Check RDP logging activities
  client=SYSLOGCLIENT
  year=$(date +"%Y")
  clientlog=/var/log/rsyslog/hosts/$client/$year/messages
  rdpLogFilter=".*Source Port:  \w+"
  tail -f $clientlog | grep -oP $rdpLogFilter
  cat $clientlog | grep -oP $rdpLogFilter
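The rdpLogFilter regex can be exercised against a canned line before tailing the live log (a sketch; the sample log line is illustrative, but it preserves the double space after "Source Port:" that the filter expects):

```shell
#!/bin/sh
# Sketch: run the RDP filter against an illustrative log line
rdpLogFilter=".*Source Port:  \w+"
sample='Aug 31 04:28:21 SYSLOGCLIENT Security 4624 Source Network Address: 10.0.0.5  Source Port:  54122'
echo "$sample" | grep -oP "$rdpLogFilter"
```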

Configuration Example 1:

Panic Soft
#NoFreeOnExit TRUE

define ROOT     C:\Program Files (x86)\nxlog
define CERTDIR  %ROOT%\cert
define CONFDIR  %ROOT%\conf
define LOGDIR   %ROOT%\data
define LOGFILE  %LOGDIR%\nxlog.log
LogFile %LOGFILE%

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data

<Extension _syslog>
    Module      xm_syslog
</Extension>

<Extension _charconv>
    Module      xm_charconv
    AutodetectCharsets iso8859-2, utf-8, utf-16, utf-32
</Extension>

<Extension _exec>
    Module      xm_exec
</Extension>

<Extension _fileop>
    Module      xm_fileop

    # Check the size of our log file hourly, rotate if larger than 5MB
    <Schedule>
        Every   1 hour
        Exec    if (file_exists('%LOGFILE%') and \
                   (file_size('%LOGFILE%') >= 5M)) \
                    file_cycle('%LOGFILE%', 8);
    </Schedule>

    # Rotate our log file every week on Sunday at midnight
    <Schedule>
        When    @weekly
        Exec    if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
    </Schedule>
</Extension>

<Extension _gelf>
    Module xm_gelf
</Extension>

# How to generate query: https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/advanced-xml-filtering-in-the-windows-event-viewer/ba-p/399761
# Please note that indentation is important!
<Input from_eventlog>
    Module      im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Security">
                    *[System[Level=0 and (EventID=4624 or EventID=4647)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

# Source: https://nxlog.co/documentation/nxlog-user-guide/forwarding.html
<Output om_udp>
    Module      om_udp # OR om_tcp (depending on server listening port discovered in http://{graylog.server.url}/system/inputs)
    Host        10.10.10.88
    Port        12222
    OutputType  GELF_UDP # OR GELF_TCP OR Exec to_syslog_ietf(); OR to_syslog_snare(); OR Binary OR to_json(); 
</Output>

<Route out>
    Path    from_eventlog  => om_udp
</Route>

Configuration Example 2:

Panic Soft
#NoFreeOnExit TRUE
 
define ROOT     C:\Program Files (x86)\nxlog
define CERTDIR  %ROOT%\cert
define CONFDIR  %ROOT%\conf
define LOGDIR   %ROOT%\data
define LOGFILE  %LOGDIR%\nxlog.log
LogFile %LOGFILE%
 
Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
 
<Extension _syslog>
    Module      xm_syslog
</Extension>
 
<Extension _charconv>
    Module      xm_charconv
    AutodetectCharsets iso8859-2, utf-8, utf-16, utf-32
</Extension>
 
<Extension _exec>
    Module      xm_exec
</Extension>
 
<Extension _fileop>
    Module      xm_fileop
 
    # Check the size of our log file hourly, rotate if larger than 5MB
    <Schedule> 
        Every   1 hour
        Exec    if (file_exists('%LOGFILE%') and \
                   (file_size('%LOGFILE%') >= 5M)) \
                    file_cycle('%LOGFILE%', 8);
    </Schedule>
 
    # Rotate our log file every week on Sunday at midnight
    <Schedule>
        When    @weekly
        Exec    if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
    </Schedule>
</Extension>
 
<Extension _gelf>
    Module xm_gelf
</Extension>
 
<Input im_msvistalog>
    Module      im_msvistalog
    Query   <QueryList>\
                <Query Id="0">\
                    <Select Path="System">*[System[(Level=1 or Level=2 or Level=3)]]</Select>\
                    <Select Path="Application">*[System[(Level=1 or Level=2 or Level=3)]]</Select>\
                    <Select Path="Security">*[System[Level=0 and (EventID=4624 or EventID=4647)]]</Select>\
                </Query>\
            </QueryList>
</Input>

<Output send_graylog>
    Module      om_tcp
    Host        10.10.10.88
    Port        12201
    OutputType  GELF_TCP
</Output>

<Processor norepeat>
    Module pm_norepeat
    CheckFields Message
</Processor>

<Route out>
    Path    im_msvistalog  => norepeat => send_graylog
</Route>

Example 3:

Panic Soft
#NoFreeOnExit TRUE
 
define ROOT     C:\Program Files (x86)\nxlog
define CERTDIR  %ROOT%\cert
define CONFDIR  %ROOT%\conf
define LOGDIR   %ROOT%\data
define LOGFILE  %LOGDIR%\nxlog.log
LogFile %LOGFILE%
 
Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
 
<Extension _syslog>
    Module      xm_syslog
</Extension>
 
<Extension _charconv>
    Module      xm_charconv
    AutodetectCharsets iso8859-2, utf-8, utf-16, utf-32
</Extension>
 
<Extension _exec>
    Module      xm_exec
</Extension>
 
<Extension _fileop>
    Module      xm_fileop
 
    # Check the size of our log file hourly, rotate if larger than 5MB
    <Schedule> 
        Every   1 hour
        Exec    if (file_exists('%LOGFILE%') and \
                   (file_size('%LOGFILE%') >= 5M)) \
                    file_cycle('%LOGFILE%', 8);
    </Schedule>
 
    # Rotate our log file every week on Sunday at midnight
    <Schedule>
        When    @weekly
        Exec    if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
    </Schedule>
</Extension>
 
<Extension _gelf>
    Module xm_gelf
</Extension>

# This example would filter the Windows Event Logs to capture only Network and Interactive logons
# Sources:
# - https://nxlog.co/documentation/nxlog-user-guide/eventlog-filtering.html
# - https://eventlogxp.com/blog/logon-type-what-does-it-mean/
<Input winEvents>
    Module      im_msvistalog
    Query   <QueryList>\
                <Query Id="0">\
                    <Select Path="Security">*[System[Level=0 and (EventID=4624 or EventID=4647)]]</Select>\
                </Query>\
            </QueryList>
    <Exec>
        if ($TargetUserName == "SYSTEM" OR $LogonType in ("4", "5", "7", "9"))
            drop();
    </Exec>
</Input>

<Output send_graylog>
    Module      om_tcp
    Host        0.0.100.88
    Port        12201
    OutputType  GELF_TCP
</Output>

<Processor norepeat>
    Module pm_norepeat
    CheckFields Message
</Processor>

<Route out>
    Path    winEvents => norepeat => send_graylog
</Route>

Windows WSL: How to Fix Broken bashrc File

Problem:

When Windows Subsystem for Linux (WSL) is set up incorrectly, such as when $HOME/.bashrc is corrupted or contains an ‘exit’ statement somewhere in its runtime, Windows cannot log into its WSL distribution using the normal wsl command.

How To Fix:
# Execute bash using a different home directory
C:\Windows\System32\bash.exe ~ -c /bin/bash

# Edit the broken bashrc file
vim ~/.bashrc

 

Pihole Error: Tried 100 Times to Connect to FTL Server

Error Message:
DataTables warning: table id=all-queries - Tried 100 times to connect to FTL server, but never got proper reply. Please check Port and logs!

Resolution:

Case Standalone Linux OS Installation:

sudo service pihole-FTL stop
sudo mv /etc/pihole/pihole-FTL.db /etc/pihole/pihole-FTL-damaged.db
sudo service pihole-FTL start

Case Kubernetes Cluster:

# Note: 'k' is a common alias for kubectl
k scale deployment --replicas=0 pihole
# wait a few seconds for the existing pods to terminate
sudo mv /mnt/pathToNfsPihole/pihole-FTL.db /mnt/pathToNfsPihole/pihole-FTL.db.bad
k scale deployment --replicas=1 pihole