How To Install Graylog in a Kubernetes Cluster Using Helm Charts

The following narrative assumes that a Kubernetes cluster (running atop the Docker 20.10 engine, current stable at the time of writing) has been set up with MetalLB as the load-balancer implementation. This should also work with Traefik or other load balancers.

# Create a separate namespace for this project
kubectl create namespace graylog

# Change into the graylog namespace
kubectl config set-context --current --namespace=graylog
kubectl config view --minify | grep namespace: # Validate it

# Optional: delete previous test instances of graylog that have been deployed via Helm
helm delete "graylog" --namespace graylog
kubectl delete pvc --namespace graylog --all

# How to switch execution context back to the 'default' namespace
kubectl config set-context --current --namespace=default

# Optional: install mongodb prior to Graylog
helm install "mongodb" bitnami/mongodb --namespace "graylog" \
  --set persistence.size=100Gi
# Sample output:
NAME: mongodb
LAST DEPLOYED: Thu Aug 29 00:07:36 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
MongoDB® can be accessed on the following DNS name(s) and ports from within your cluster:
    mongodb.graylog.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace graylog mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, create a MongoDB® client container:
    kubectl run --namespace graylog mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.8-debian-10-r9 --command -- bash
Then, run the following command:
    mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace graylog svc/mongodb 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

# REQUIRED: Pre-install Elasticsearch version 7.10, the highest version supported by Graylog 4.1.3
# Source: https://artifacthub.io/packages/helm/elastic/elasticsearch/7.10.2
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --namespace "graylog" \
  --set imageTag=7.10.2 \
  --set data.persistence.size=100Gi
# Sample output:
NAME: elasticsearch
LAST DEPLOYED: Sun Aug 29 04:35:30 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=graylog -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch

# Install Graylog with MongoDB bundled, while integrating with the pre-deployed Elasticsearch instance
#
# This install command assumes that the protocol preference for transporting logs is TCP.
# Also, the current Helm chart does not allow mixing TCP with UDP; conveniently, this approach
# matches business requirements where the reliable TCP transmission protocol is necessary to record security data.
# Add the KongZ chart repo first, if not already present: helm repo add kongz https://charts.kongz.com
helm install graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.type=LoadBalancer \
  --set graylog.service.port=80 \
  --set graylog.service.loadBalancerIP=10.10.100.88 \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# Optional: append these flags to the helm install command above if the mongodb component has been installed separately
  --set tags.install-mongodb=false \
  --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0 \

# Moreover, graylog chart version 1.8.4 doesn't seem to set externalTrafficPolicy as expected.
# Set externalTrafficPolicy to 'Local' to preserve source client IPs
kubectl patch svc graylog-web -n graylog -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Sometimes, the static EXTERNAL-IP would be assigned to graylog-master, while the graylog-web EXTERNAL-IP
# would remain in <pending> status indefinitely.
# Workaround: set the services to share a single external IP
kubectl patch svc graylog-web -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'
kubectl patch svc graylog-web -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'

# Test sending logs to the server via TCP (note: bash variable names cannot contain hyphens)
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -w 1 $graylogServer 514

# Test via UDP
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -u -w 1 $graylogServer 514

# Optional: graylog Ingress
cat > graylog-ingress.yaml <<EOF
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: graylog-ingress
  namespace: graylog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # set these for SSL
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"  
spec:
  rules:
  - host: graylog.kimconnect.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 12201
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 514              
EOF
kubectl apply -f graylog-ingress.yaml

Troubleshooting Notes:

# Sample commands to patch graylog service components
kubectl patch svc graylog-web -p '{"spec":{"type":"LoadBalancer"}}' # Convert ClusterIP to LoadBalancer to gain ingress
kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}' # Add externalIPs
kubectl patch svc graylog-master -n graylog -p '{"spec":{"loadBalancerIP":""}}' # Remove loadBalancer IPs
kubectl patch svc graylog-master -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}' # Purge ingress IPs
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[{"ip":"10.10.100.88"}]}}}'
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}'

# Alternative solution: mixing UDP with TCP
# The current chart version only allows this when service Type = ClusterIP (default)
helm upgrade graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.port=80 \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[0].protocol=UDP \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.service.ports[1].protocol=UDP \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# This error message occurs when combining TCP with UDP; hence, service type ClusterIP must be used
Error: UPGRADE FAILED: cannot patch "graylog-web" with kind Service: Service "graylog-web" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"graylog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:9000, StrVal:""}, NodePort:32518}, core.ServicePort{Name:"gelf", Protocol:"UDP", AppProtocol:(*string)(nil), Port:12201, TargetPort:intstr.IntOrString{Type:0, IntVal:12201, StrVal:""}, NodePort:0}, core.ServicePort{Name:"gelf2", Protocol:"TCP", AppProtocol:(*string)(nil), Port:12222, TargetPort:intstr.IntOrString{Type:0, IntVal:12222, StrVal:""}, NodePort:31523}, core.ServicePort{Name:"syslog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:31626}}: may not contain more than 1 protocol when type is 'LoadBalancer'

# This error occurs when a string is supplied where an array type value is expected
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Service.spec.externalIPs): invalid type for io.k8s.api.core.v1.ServiceSpec.externalIPs: got "string", expected "array"
# Solution:
--set "array={a,b,c}" OR --set service[0].port=80

# Graylog would not start and this was the error:
com.github.joschi.jadconfig.ValidationException: Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable

# Workaround
graylogData=/mnt/k8s/graylog-journal-graylog-0-pvc-04dd9c7f-a771-4041-b549-5b4664de7249/
chown -fR 1100:1100 $graylogData

NAME: graylog
LAST DEPLOYED: Thu Aug 29 03:26:00 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To connect to your Graylog server:
1. Get the application URL by running these commands:
  Graylog Web Interface uses JavaScript to get detail of each node. The client JavaScript cannot communicate to node when service type is `ClusterIP`.
  If you want to access Graylog Web Interface, you need to enable Ingress.
    NOTE: Port Forward does not work with web interface.
2. The Graylog root users
  echo "User: admin"
  echo "Password: $(kubectl get secret --namespace graylog graylog -o "jsonpath={.data['graylog-password-secret']}" | base64 --decode)"
To send logs to graylog:
  NOTE: If `graylog.input` is empty, you cannot send logs from other services. Please make sure the value is not empty.
        See https://github.com/KongZ/charts/tree/main/charts/graylog#input for detail

k describe pod graylog-0
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         11m                   default-scheduler  Successfully assigned graylog/graylog-0 to linux03
  Normal   Pulled            11m                   kubelet            Container image "alpine" already present on machine
  Normal   Created           11m                   kubelet            Created container setup
  Normal   Started           10m                   kubelet            Started container setup
  Normal   Started           4m7s (x5 over 10m)    kubelet            Started container graylog-server
  Warning  Unhealthy         3m4s (x4 over 9m14s)  kubelet            Readiness probe failed: Get "http://172.16.90.197:9000/api/system/lbstatus": dial tcp 172.16.90.197:9000: connect: connection refused
  Normal   Pulled            2m29s (x6 over 10m)   kubelet            Container image "graylog/graylog:4.1.3-1" already present on machine
  Normal   Created           2m19s (x6 over 10m)   kubelet            Created container graylog-server
  Warning  BackOff           83s (x3 over 2m54s)   kubelet            Back-off restarting failed container


# Set external IP
# This only works on LoadBalancer, not ClusterIP
# kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}'
# kubectl patch svc graylog-master -p '{"spec":{"externalIPs":[]}}'

kubectl patch service graylog-web --type='json' -p='[{"op": "add", "path": "/metadata/annotations/kubernetes.io~1ingress.class", "value":"nginx"}]'

# Set annotation to allow shared IPs between 2 different services
kubectl annotate service graylog-web metallb.universe.tf/allow-shared-ip=graylog
kubectl annotate service graylog-master metallb.universe.tf/allow-shared-ip=graylog

metadata:
  name: $serviceName-tcp
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: psk

# Ingress
appName=graylog
domain=graylog.kimconnect.com
deploymentName=graylog-web
containerPort=9000
cat <<EOF> $appName-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $appName-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: $domain
    http:
      paths:
      - backend:
          service:
            name: $deploymentName
            port:
              number: 9000
        path: /
        pathType: Prefix
EOF
kubectl apply -f $appName-ingress.yaml

# delete pvc's
namespace=graylog
kubectl delete pvc data-graylog-elasticsearch-data-0 -n $namespace
kubectl delete pvc data-graylog-elasticsearch-master-0 -n $namespace
kubectl delete pvc datadir-graylog-mongodb-0 -n $namespace
kubectl delete pvc journal-graylog-0 -n $namespace

# delete all pvc's in namespace the easier way
# (--no-headers prevents the NAME header row from being treated as a pvc)
namespace=graylog
kubectl get pvc -n $namespace --no-headers | awk '{print $1}' | while read vol; do kubectl delete pvc/${vol} -n $namespace; done

2021-08-20 20:19:41,048 INFO    [cluster] - Exception in monitor thread while connecting to server mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017 - {}
com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[graylog.jar:?]
        at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[graylog.jar:?]
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302]
Caused by: java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_302]
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[graylog.jar:?]
        ... 5 more

2021-08-20 20:19:42,981 INFO    [cluster] - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@69419d59 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}, caused by {java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}}]}. Waiting for 30000 ms before timing out - {}

# Alternative version - that doesn't work
# helm repo add groundhog2k https://groundhog2k.github.io/helm-charts/
# helm install graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri='http://127.0.0.1:9000/' \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true

# helm upgrade graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri=http://localhost:9000/ \
#   --set service.externalTrafficPolicy=Local \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true \
#   --set storage.className=nfs-client \
#   --set storage.requestedSize=200Gi

# kim@linux01:~$ k logs graylog-0
# 2021-08-29 03:47:09,345 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
# com.github.joschi.jadconfig.ValidationException: Couldn't run validator method
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:227) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:100) ~[graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:420) [graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:236) [graylog.jar:?]
#         at org.graylog2.bootstrap.Main.main(Main.java:45) [graylog.jar:?]
# Caused by: java.lang.reflect.InvocationTargetException
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]
#         ... 4 more
# Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: "!s"
#         at java.net.URLDecoder.decode(URLDecoder.java:194) ~[?:1.8.0_302]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1035) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1030) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.<init>(ConnectionString.java:336) ~[graylog.jar:?]
#         at com.mongodb.MongoClientURI.<init>(MongoClientURI.java:256) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.getMongoClientURI(MongoDbConfiguration.java:59) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.validate(MongoDbConfiguration.java:64) ~[graylog.jar:?]
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]

How to Setup Dynamic DNS with Google Domains & Ubiquity EdgeRouter

Step 1: Set up Dynamic DNS

– Access Google Domains: https://domains.google.com/registrar/
– Click on the Manage button, next to your domain
– Click on DNS
– Scroll toward the bottom to click on Advanced Settings
– Click on Manage dynamic DNS
– Leave the hostname field blank, click on Save
– If this domain already has a record, click on Replace to proceed or Cancel to input a different sub-domain
– Click on the drop-down menu next to ‘Your domain has Dynamic DNS setup’
– Select View credentials to trigger a pop-up window
– Click on View to see the username and password generated for this domain
– Copy and paste the information into a notepad to be used in ‘Step 2’
– Select Close

Step 2: Configure EdgeRouter with Dynamic DNS

– Access the router: https://ip.address.of.router/#Services/DNS
– In section Dynamic DNS, click the Add Dynamic DNS Interface button
– Set these values:
  – Interface: eth0 (or WAN interface)
  – Web: <leave blank>
  – Web-skip: <leave blank>
  – Service: dyndns
  – Hostname: kimconnect.com (or the hostname that has been setup in step 1)
  – Login: {username copied in step 1}
  – Password: {password copied in step 1}
  – Protocol: dyndns2
  – Server: domains.google.com
– Click on Apply
– Click on Force Update; expect the message ‘The configuration has been applied successfully’
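For reference, a dyndns2 update is just one authenticated HTTPS request; the sketch below only assembles and prints the URL that the EdgeRouter sends (the username, password, and hostname values are placeholders for the credentials generated in Step 1):

```shell
# Placeholders - substitute the username/password generated in Step 1
username='generatedUsername'
password='generatedPassword'
hostname='kimconnect.com'

# dyndns2 update URL against the domains.google.com endpoint
updateUrl="https://${username}:${password}@domains.google.com/nic/update?hostname=${hostname}"
echo "$updateUrl"
```

Sending it manually with curl -s "$updateUrl" should return 'good <ip>' or 'nochg <ip>' when the credentials are valid.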

Linux: Creating Soft Links as Directories

Optional test: create a soft link for a directory, since hard links to directories are not allowed

source=/nfs-share/linux03/docker/containers
destinationdirectory=/var/lib/docker
sudo mkdir -p $source
sudo ln -sfn $source $destinationdirectory

# Notes:
# The -sfn options force an overwrite if the link already exists, which avoids this error:
# ln: failed to create symbolic link '/var/lib/docker/containers': File exists
# -n, --no-dereference: treat LINK_NAME as a normal file if it is a symbolic link to a directory

This is a fail-safe sequence to re-create an existing directory as a symlink toward a destination. In this example, the directory is held by a process named docker. Thus, it's necessary to stop that process > delete its directory > recreate the directory as a link toward the desired destination

# The below sequence would pre-empt this error:
# ln: /var/lib/docker/containers: cannot overwrite directory
sudo su
systemctl stop docker
directoryname=containers
source=/nfs-share/linux03/docker/$directoryname
destinationdirectory=/var/lib/docker
sudo mkdir -p $source
sudo rmdir $destinationdirectory/$directoryname
sudo ln -sfn $source $destinationdirectory
systemctl start docker

Optional: how to remove a symlink

directoryname=containers
destinationdirectory=/var/lib/docker
rm $destinationdirectory/$directoryname
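As a quick sanity check after creating or removing links, readlink prints the target stored in a symlink; a minimal self-contained example using throwaway temp directories (so nothing under /var/lib/docker is touched):

```shell
# Create a link inside a scratch directory and read its target back
src=$(mktemp -d)
dst=$(mktemp -d)
ln -sfn "$src" "$dst/containers"
readlink "$dst/containers"   # prints the path held in $src
```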

Should I Choose Motherboard (Fake) Hardware RAID Or Software RAID?

Motherboard Hardware RAID:

- Advantages:
- OS independence
- Easier to set up and use
- Disadvantages:
- Motherboard dependence. It may require an identical model and firmware to transfer RAID hard drives.
- Typically slower to boot

Software RAID:

- Advantages:
- Motherboard independence
- Portable to systems with a similar operating system
- Disadvantages:
- Cannot rebuild RAID volumes that were created via hardware RAID
- Dual-boot OS'es will not recognize each other's software RAID volumes

Other considerations:
– Newer computers have fast CPUs that bring software RAID performance on par with hardware RAID.
– Linux users may have access to more advanced RAID configurations, such as RAID 50

My answer: for servers and non-dual-boot setups, go with software RAID when an expensive dedicated hardware RAID card is not an option. Motherboard (fake) RAID has no real advantages on newer computers.

Pihole Error: Tried 100 Times to Connect to FTL Server

Error Message:
DataTables warning: table id=all-queries - Tried 100 times to connect to FTL server, but never got proper reply. Please check Port and logs!
Resolution:

Case Standalone Linux OS Installation:

sudo service pihole-FTL stop
sudo mv /etc/pihole/pihole-FTL.db /etc/pihole/pihole-FTL-damaged.db
sudo service pihole-FTL start

Case Kubernetes Cluster:

k scale deployment --replicas=0 pihole
# wait a few seconds for existing pods to terminate
sudo mv /mnt/pathToNfsPihole/pihole-FTL.db /mnt/pathToNfsPihole/pihole-FTL.db.bad
k scale deployment --replicas=1 pihole

Linux: How to Create RSA Keys and Configure the SSH Agent

Step 1: Generate RSA Key
# Command to generate rsa key for Ubuntu Linux
ssh-keygen -t rsa -b 2048

There are 2 questions:
a. Enter file in which to save the key => press [enter] key to accept default ~/.ssh
b. Enter passphrase => create a password to access rsa key (recommended)
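For scripted use, the same key pair can be generated non-interactively; this sketch writes a throwaway key into a temp directory (-N '' sets an empty passphrase for illustration only, which is not recommended for real keys):

```shell
# Generate a throwaway 2048-bit RSA key pair without any prompts
keyDir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -f "$keyDir/id_rsa" -N '' -C 'throwaway example key'
ls "$keyDir"
```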

Step 2: Configure SSH Agent

a. Add SSH Key to ssh-agent

$ eval "$(ssh-agent -s)"
Agent pid 95765

b. Setup Bash to Load ssh-agent

# These lines are meant to generate a $HOME/.bashrc file
# Escape characters are added - please remove those if commands are to be extracted
# Add this to ~/.bash_profile on OSX/Linux or ~/.bashrc on Ubuntu
# Part of script comes from http://mah.everybody.org/docs/ssh
cat<< EOF > ~/.bashrc
#OR APPEND: cat<< EOF >> ~/.bashrc
echo "Loading ~/.bashrc ..."
SSH_ENV="/home/kim/.ssh/agent-environment"
function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "\${SSH_ENV}"
    echo succeeded
    chmod 600 "\${SSH_ENV}"
    . "\${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add;
    echo "Key loaded: "
    /usr/bin/ssh-add -L;
}
# Source SSH settings, if applicable
#i=$(ps -eaf | grep -i ssh-agent |sed '/^$/d' | wc -l | bc -l) # check for service and cast output as integer
#if [ $i > 1 ]; then
#    echo "loading ssh key from its default location..."
#    /usr/bin/ssh-add;
if [ -f "\${SSH_ENV}" ]; then
    . "\${SSH_ENV}" > /dev/null
    ps -ef | grep \${SSH_AGENT_PID} | grep ssh-agent\$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
EOF

PowerShell: Query Google Account Using GAM

$emailAddress='someone@yourcompany.com'
$field='accounts:last_login_time'

function getGamUser{
    param(
        $emailAddress,
        $field
    )

    $result=try{gam report users user $emailAddress}catch{}
    if($result){
        $headers=$result[0] -split ','
        $index=$headers.IndexOf($field)
        return ($result[1] -split ',')[$index]
    }else{
        write-warning "$emailAddress field $field has not matched anything"
        return $null
    }
}

getGamUser $emailAddress $field

PowerShell: Download and Expand Zip File – Legacy Compatible

function downloadFile{
    param(
        $url,
        $tempFolder="C:\Temp"
    )
    try{
        $fileName=split-path $url -leaf
        $tempFile="$tempFolder\$fileName"
        try{[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12}catch{}
        New-Item -ItemType Directory -Force -Path $tempFolder | Out-Null # suppress output so the function returns only $true/$false
        $webClient = New-Object System.Net.WebClient
        $webClient.DownloadFile($url,$tempFile)
        return $true
    }catch{
        write-warning $_
        return $false
    }
}

function expandZipfile($file,$destination){
    $destination=if($destination){$destination}else{split-path $file -parent}
    try{
        Add-Type -AssemblyName System.IO.Compression.FileSystem
        [System.IO.Compression.ZipFile]::ExtractToDirectory($file, $destination)
    }catch{
        write-warning $_
    }
}

$downloadedFile="C:\Temp\emcopy.zip" # default download location per downloadFile's $tempFolder
$downloaded=downloadFile "https://kimconnect.com/wp-content/uploads/2019/08/emcopy.zip"
if($downloaded){
    $destinationFolder="C:\Windows\System32"
    expandZipfile $downloadedFile -destination $destinationFolder
}

Converting from IOPS to MB/s

Conversion factors: from IOPS to MiB/s

Throughput is a function of Input/Output per second (IOPS) multiplied by the block (cluster) size in bytes, divided by 1,048,576 (the number of bytes per MiB). Hence, larger block sizes will yield higher sustained data transfer at the same IOPS.

clusterSize => multiplier
512 => 2048
1024 => 1024
4KiB => 256
8KiB => 128
16KiB => 64
32KiB => 32

Formulas (clusterSize in bytes)
  • MiB/s = IOPS / (1048576/clusterSize) = (IOPS x clusterSize) / 1048576
  • IOPS = (1048576/clusterSize) x MiB/s = (MiB/s x 1048576) / clusterSize
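The formulas can be sanity-checked with a quick calculation; for example, 10,000 IOPS at a 4KiB cluster size (multiplier 256):

```shell
# 10,000 IOPS at a 4096-byte cluster size; multiplier = 1048576/4096 = 256
iops=10000
clusterSize=4096
awk -v iops="$iops" -v cs="$clusterSize" \
  'BEGIN { printf "%.2f MiB/s\n", (iops * cs) / 1048576 }'
# 10000 / 256 = 39.0625, printed as 39.06 MiB/s
```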

System Center Virtual Machine Manager Errors and Resolutions

Error Message:

Error (415)
Agent installation failed copying 'C:\Program Files\Microsoft System Center\Virtual Machine Manager\agents\I386\IRV-VMM01\msiInstaller.exe' to '\\IRV-HYPERV06\ADMIN$\msiInstaller.exe'.
The specified network name is no longer available

Recommended Action
Ensure '\\IRV-HYPERV06\msiInstaller.exe' to '\\IRV-VMM01' is online and not blocked by a firewall.
Ensure that file and printer sharing is enabled on 'IRV-VMM01\msiInstaller.exe' to '\\IRV-HYPERV06' and it not blocked by a firewall.
Ensure that WMI is enabled on 'IRV-VMM01\msiInstaller.exe' to '\\IRV-HYPERV06' and it is not blocked by firewall.
Ensure that there is sufficient free space on the system volume.
Verify that the ADMIN$ share on 'IRV-VMM01\msiInstaller.exe' to '\\IRV-HYPERV06' exists. If the ADMIN$ share does not exist, restart 'IRV-VMM01\msiInstaller.exe' to '\\IRV-HYPERV06' and then try the operation again.

Fix:

Open firewall ports required by SCVMM:
TCP/22
TCP/80
TCP/135
TCP/139
TCP/445
TCP/443
TCP/623
TCP/1433
TCP/5985
TCP/5986
TCP/8530
TCP/8531
TCP/8100
TCP/8101
TCP/8102
TCP/49152 to TCP/65535

Source: https://docs.microsoft.com/en-us/system-center/vmm/plan-ports-protocols?view=sc-vmm-2019

Error:

Error (10421)
The specified user account cannot be the same as the VMM service account.

Fix:

Create a service account to be used to install the VMM agent onto Hyper-V hosts. This new service account must be a member of the local Administrators group on each Hyper-V host.

PowerShell: Create User Accounts From CSV File

# User-input Variables
$csvFile='C:\Users-finalized.csv'
$newOu='CN=Users,DC=kimconnect,DC=com'
$newCompany='Kim Connect'
$logFile="c:\temp\createActiveDirectoryAccounts-$(get-date -f yyyy-MM-dd-HH-mm-ss).txt" # MM = month and HH = 24-hour; lowercase mm means minutes

function createActiveDirectoryAccounts{
  param(
    $csvFile,
    $newOu,
    $newCompany,
    $logFile="c:\temp\createActiveDirectoryAccounts-$(get-date -f yyyy-MM-dd-HH-mm-ss).txt" # MM = month and HH = 24-hour
  )

  # Declare variables Log Files
  $failures = @()
  $usersAlreadyExist =@()
  $successes = @()
  $erroractionpreference = "Continue"

  write-host "Gathering Active Directory Users Information..."
  $existingUsers=Get-ADUser -Filter * -property SamAccountName,EmailAddress
  $users = Import-Csv -Path $csvFile
  $voidChars='#N/A','',$null
  $ou=if($newOu){$newOu}else{
    $domainLdapExpression='DC='+((($env:USERDNSDOMAIN).tolower() -split '\.') -join ',DC=');
    'CN=Users,'+$domainLdapExpression
  }
  $sortedRecords=$users|sort -property newSamAccountName

  write-host "Commencing users creation..."
  foreach ($user in $sortedRecords) {
    $userExists=$user.newSamAccountName -in $existingUsers.SamAccountName -or $user.newEmailAddress -in $existingUsers.EmailAddress
    if(!$userExists){
      $password = $user.newPassword | ConvertTo-SecureString -AsPlainText -Force
      $proxyAddresses = $user.newEmailAddress
      $streetAddress=if($user.'Street 2' -notin $voidChars){$user.'Street 1'+', '+$user.'Street 2'}elseif($user.'Street 1' -notin $voidChars){$user.'Street 1'}else{$null}
      $city =if($user.City -notin $voidChars){$user.City}else{$null}
      $state =if($user.State -notin $voidChars){$user.State}else{$null}
      $postalCode=if($user.PostalCode -notin $voidChars){$user.PostalCode}else{$null}
      $country=if($user.Country -notin $voidChars){$user.Country}else{$null}
      $jobTitle=if($user.Title -notin $voidChars){$user.Title}else{$null}
      $telephone=if($user.telephoneNumber -notin $voidChars){$user.telephoneNumber}else{$null}
      $extension=if($user.Extension -notin $voidChars){$user.Extension}else{$null}
      $displayName=$user.Surname+', '+$user.GivenName
      $newSamAccountName=$user.newSamAccountName
      $newUserPrincipleName=$newSamAccountName+'@'+$env:USERDNSDOMAIN
      # Generating a hash table as a splatting technique
      $params = @{        
        SamAccountName = $newSamAccountName;
        Path = $ou;
        Enabled = $true;        
        AccountPassword = $password;
        ChangePasswordAtLogon = $False;
        EmployeeID = $user.EmployeeID;
        Name = $displayName;
        GivenName = $user.GivenName;
        Surname = $user.Surname;
        DisplayName = $displayName;
        UserPrincipalName = $newUserPrincipleName;
        Initials = $user.Initials;
        Description = $user.Description;    
        Office = $user.Office;
        Title = $jobTitle
        # Manager = $user.newManagerDN
        Company = $newCompany;
        Department = $user.Department;
        Division = $user.Division;
        StreetAddress = $streetAddress;
        EmailAddress = $user.newEmailAddress;
        City = $city
        State = $state
        PostalCode = $postalCode
        Country = $country
        OfficePhone = $telephone
        OtherAttributes = @{ 
            IPPhone = $extension;
            #extensionAttribute2 = $($User.extensionAttribute2);
            #extensionAttribute3 = $($User.extensionAttribute3);
            #extensionAttribute4 = $($User.extensionAttribute4);
          }
        }
    
      # Removing empty values
      @($params.OtherAttributes.Keys)|%{if(-not $params.OtherAttributes[$_]){$params.OtherAttributes.Remove($_)}}
      $voidCount=(@($params.OtherAttributes.Keys) | % {if ($params.OtherAttributes[$_] -in $voidChars) {$_}}).Count
      if($params.OtherAttributes.Keys.count -eq $voidCount){$params.Remove('OtherAttributes')}
      @($params.Keys) | % {if ($null -eq $params[$_]) {$params.Remove($_)}}
      
      try{       
        # Creating the user account
        New-ADUser @params -PassThru #-Verbose      
        
        # Setting the Proxy Address as required for Office 365 and Microsoft Exchange integration
        if(-not [string]::IsNullOrWhiteSpace($proxyAddresses)){
          foreach($proxyAddress in ($proxyAddresses -split ';')){
            write-host "adding $proxyAddress to user $newSamAccountName" -ForegroundColor Yellow
            Set-ADUser -Identity $user.newSamAccountName -Add @{proxyAddresses=$proxyAddress}
          }
        }
        $successes+="$newSamAccountName with display name '$displayName' has been successfully created"
      }catch{
        $failures+="$newSamAccountName creation has failed with error: $_"
      }
    }else{
      write-warning "$($user.newSamAccountName) already exists in $env:USERDNSDOMAIN"
      $usersAlreadyExist+="$($user.newSamAccountName) already exists"
    }
  }

  write-host "Updating Managers DNs..."
  foreach ($user in $sortedRecords) {
    try{      
      if($user.newManagerDN -notin $voidChars){
        Set-ADUser $user.newSamAccountName -Manager $user.newManagerDN
      }         
    }catch{
      $failures+="$($user.newSamAccountName) Manager DN update has failed with this error: $_"
    }
  }

  $divider="`r`n`r`n==============================================================`r`n`r`n"
  $logMessages=$failures+$divider+$usersAlreadyExist+$divider+$successes
  $logMessages | Out-File -FilePath $logFile
  write-host "Results have been logged at $logFile"
}

createActiveDirectoryAccounts $csvFile $newOu $newCompany $logFile
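For reference, the CSV consumed by the function above must use column names matching the properties referenced in the script. A minimal hypothetical sample (values are illustrative only):

```csv
newSamAccountName,newPassword,newEmailAddress,GivenName,Surname,Initials,EmployeeID,Title,Department,Division,Office,Description,Street 1,Street 2,City,State,PostalCode,Country,telephoneNumber,Extension,newManagerDN
jdoe,P@ssw0rd!,jdoe@kimconnect.com;john.doe@kimconnect.com,John,Doe,JD,1001,Engineer,IT,Infra,HQ,Test user,123 Main St,Suite 4,Los Angeles,CA,90001,US,555-0100,4321,"CN=Smith\, Jane,CN=Users,DC=kimconnect,DC=com"
```

Note that multiple proxy addresses are separated by semicolons (the script splits on ';'), and a manager DN containing commas must be quoted.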

PowerShell: Checking for Duplicate Identifiers Among ADFS Relying Party Trusts

function getDuplicatingIfd{
  write-host "Checking each relying party trust for any duplicates of identifiers..."
  $trusts=Get-AdfsRelyingPartyTrust
  $allTrustNames=$trusts.Name
  $duplicates=@()
  [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 
  foreach ($trustName in $allTrustNames){
      write-host "Checking $trustName..." -NoNewline
      #$targetTrust=Get-AdfsRelyingPartyTrust $trustName
      $targetTrust=$trusts|?{$_.Name -eq $trustName}
      $metadataUrl=$targetTrust.MetadataUrl.AbsoluteUri
      try{
        $xml=Invoke-WebRequest -Uri $metadataUrl -Method:Get -ContentType "application/xml" -ErrorAction:Stop -TimeoutSec 60
      }catch{
        #write-warning $_
        $xml=$null
      }
      if($xml){
        $endPointReferences=([xml]$xml.Content).EntityDescriptor.RoleDescriptor.TargetScopes.EndpointReference|%{$_.Address}
     
        # $targetIdentifiers=$targetTrust.Identifier # This only returns the existing IFD's that may not have been synchronized
        # $otherTrustNames=$allTrustNames|?{$_ -ne $trustName}
        # $otherTrusts=Get-AdfsRelyingPartyTrust $otherTrustNames
        $otherTrusts=$trusts|?{$_.Name -ne $trustName}
        $otherIdentifiers=$otherTrusts.Identifier
        #$duplicateIdentifiers=$targetIdentifiers|?{$_ -in $otherIdentifiers}
        $duplicateIdentifiers=$endPointReferences|?{$_ -in $otherIdentifiers}
        if($duplicateIdentifiers){
            write-host "$trustName has these duplicate identifiers"
            foreach ($duplicate in $duplicateIdentifiers){
                $duplicateTrust=$otherTrusts|?{$duplicate -in $_.Identifier}
                if($duplicateTrust){
                    write-host "$duplicate in '$trustName' and '$($duplicateTrust.Name)'"
                    $duplicates+=[PSCustomObject][ordered]@{
                      duplicateIdentifier=$duplicate;
                      offendingRelyingPartyTrust=$trustName;
                      defendingRelyingPartyTrust=$duplicateTrust.Name
                    }
                }
            }
        }else{
            write-host " no duplicates..."
        }
      }else{
        write-warning "$trustName is skipped."
      }
    sleep 1      
  }
  return $duplicates 
}

getDuplicatingIfd

Question: what problem does this solve?

Answer: this is a tool to investigate the root cause of errors such as these:

Error - AD FS Management
An error occurred during an attempt to access the AD FS configuration database:
Error message: MSIS7612: Each identifier for a relying party trust must be unique across all relying party trusts in AD FS configuration.
Protocol Name: 
Relying Party:
Exception details:
Microsoft.IdentityServer.RequestFailedException: MSIS7065: There are no registered protocol handlers on path /adfs/ls/ to process the incoming request.
at Microsoft.IdentityServer.Web.PassiveProtocolListener.OnGetContext(WrappedHttpListenerContext context)
Encountered error during federation passive request. 
Additional Data
Protocol Name:
wsfed
Relying Party: https://testcrm.kimconnect.com/
Exception details:
Microsoft.IdentityServer.Web.InvalidScopeException: MSIS7007: The requested relying party trust 'https://testcrm.kimconnect.com/' is unspecified or unsupported. If a relying party trust was specified, it is possible that you do not have permission to access the trust relying party. Contact your administrator for details.
at Microsoft.IdentityServer.Web.Protocols.WSFederation.WSFederationSignInContext.ValidateCore()
at Microsoft.IdentityServer.Web.Protocols.ProtocolContext.Validate()
at Microsoft.IdentityServer.Web.Protocols.WSFederation.WSFederationProtocolHandler.GetRequiredPipelineBehaviors(ProtocolContext pContext)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.EvaluateHomeRealm(PassiveProtocolHandler protocolHandler, ProtocolContext protocolContext)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.OnGetContext(WrappedHttpListenerContext context)
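Once getDuplicatingIfd has surfaced an offending trust, one possible remediation is to strip the duplicate identifier from that trust. This is a sketch with example values; verify which relying party trust should legitimately own the identifier before removing it:

```powershell
# Hypothetical cleanup: remove a duplicate identifier from the offending
# trust reported by getDuplicatingIfd. Trust name and URL are examples.
$offendingTrust='Test CRM'
$duplicateIdentifier='https://testcrm.kimconnect.com/'

$trust=Get-AdfsRelyingPartyTrust -Name $offendingTrust
$remainingIdentifiers=$trust.Identifier|Where-Object{$_ -ne $duplicateIdentifier}
if($remainingIdentifiers){
  # -Identifier replaces the entire identifier list on the trust
  Set-AdfsRelyingPartyTrust -TargetName $offendingTrust -Identifier $remainingIdentifiers
}else{
  write-warning "Refusing to remove the only identifier on '$offendingTrust'"
}
```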

PowerShell: Hyper-V Servers Capacity Report

# hyperVCapacityReport.ps1
# Version: 0.1

# Report parameters
$workingDirectory='C:\scripts\hyperVReports'
$selectFields='node,model,os,cpuType,sockets,cores,ramGb,ramUsedPercent,vmsCount'
$domainObjects=@(
    @{domain='kimconnect.com';dc='dc2.kimconnect.com';username='testAdmin';password='password'}    
    )

# Update this Google Sheets
$spreadsheetID='abcdefghijklmnopqstvwzyz1234567890'
$spreadsheetUrl='https://docs.google.com/spreadsheets/d/'+$spreadsheetID

# Google API Authorization
$scope = "https://www.googleapis.com/auth/spreadsheets https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/drive.file"
$certPath = 'C:\scripts\googleSheets\googleApiCert12345.p12'
$iss = 'googlesheets@googleApi.iam.gserviceaccount.com'
$certPassword = 'notasecret'

# Email parameters
$emailFrom='sysadmins@kimconnect.com'
$emailTo='sysadmins@kimconnect.com'
$subject='Hyper-V Hosts Capacity Report'
$smtpRelayServer='smtprelay.kimconnect.com'
$emailDay='Monday'

function getAllHyperVClusters($domainObjects){
    function getHyperVHosts($domainObjects){
        function includeRSAT{
            $ErrorActionPreference='stop'
            [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
            #$rsatWindows7x32='https://download.microsoft.com/download/4/F/7/4F71806A-1C56-4EF2-9B4F-9870C4CFD2EE/Windows6.1-KB958830-x86-RefreshPkg.msu'
            $rsatWindows7x64='https://download.microsoft.com/download/4/F/7/4F71806A-1C56-4EF2-9B4F-9870C4CFD2EE/Windows6.1-KB958830-x64-RefreshPkg.msu'
            $rsatWindows81='https://download.microsoft.com/download/1/8/E/18EA4843-C596-4542-9236-DE46F780806E/Windows8.1-KB2693643-x64.msu'
            $rsat1709 = "https://download.microsoft.com/download/1/D/8/1D8B5022-5477-4B9A-8104-6A71FF9D98AB/WindowsTH-RSAT_WS_1709-x64.msu"
            $rsat1803 = "https://download.microsoft.com/download/1/D/8/1D8B5022-5477-4B9A-8104-6A71FF9D98AB/WindowsTH-RSAT_WS_1803-x64.msu"
            $rsatWs2016 = "https://download.microsoft.com/download/1/D/8/1D8B5022-5477-4B9A-8104-6A71FF9D98AB/WindowsTH-RSAT_WS2016-x64.msu"
    
            # This command does not work on Windows 2012R2
            #$releaseId=(Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name ReleaseId).ReleaseId
            #Get-ItemProperty : Property ReleaseId does not exist at path HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
            #NT\CurrentVersion.
            #At line:1 char:2
            #+ (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Na ...
            #+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            #    + CategoryInfo          : InvalidArgument: (ReleaseId:String) [Get-ItemProperty], PSArgumentException
            #    + FullyQualifiedErrorId : System.Management.Automation.PSArgumentException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
    
            $releaseId=(Get-Item "HKLM:SOFTWARE\Microsoft\Windows NT\CurrentVersion").GetValue('ReleaseID')
            $osVersion=[System.Environment]::OSVersion.Version
            [double]$osVersionMajorMinor="$($osVersion.Major).$($osVersion.Minor)" 
            $osName=(Get-WmiObject Win32_OperatingSystem).Name
            #$osType=switch ((Get-CimInstance -ClassName Win32_OperatingSystem).ProductType){
            #    1 {'client'}
            #    2 {'domaincontroller'}
            #    3 {'memberserver'}
            #    }
    
            $windowsVersion=(Get-CimInstance Win32_OperatingSystem).Version
    
            switch ($releaseId){
                1607{write-host 'Windows Server 2016 Release 1607 detected';$link=$rsatWs2016;break}
                1709{write-host 'Windows Server 2016 Release 1709 detected';$link=$rsat1709;break}
                1803{write-host 'Windows Server 2016 Release 1803 detected';$link=$rsat1803}
            }
        
            switch ($osVersionMajorMinor){
                {$_ -eq 6.0}{write-host 'Windows Server 2008 or Windows Vista detected';$link=$rsat1709;break}
                {$_ -eq 6.1}{write-host 'Windows Server 2008 R2 or Windows 7 detected';$link=$rsatWindows7x64;break}
                {$_ -eq 6.2}{write-host 'Windows Server 2012 or Windows 8.1 detected';$link=$rsatWindows81;break}
                {$_ -eq 6.3}{write-host 'Windows Server 2012 R2 detected';$link=$rsatWindows81}
            }
    
            if (!(Get-Module -ListAvailable -Name ActiveDirectory -EA SilentlyContinue)){
                Write-host "Prerequisite checks: module ActiveDirectory NOT currently available on this system. Please wait while the program adds that plugin..."
                try{
                    # If OS is Windows Server, then install RSAT using a different method
                    if ($osName -match "^Microsoft Windows Server") {
                        # This sequence has confirmed to be valid on Windows Server 2008 R2 and above
                        Write-Verbose "Importing Windows Feature: RSAT-AD-PowerShell"
                        Import-Module ServerManager
                        Add-WindowsFeature RSAT-AD-PowerShell
                        }
                    else{
                        Write-Verbose "This sequence targets Windows Client versions"
                        $destinationFile= ($ENV:USERPROFILE) + "\Downloads\" + (split-path $link -leaf)
                        Write-Host "Downloading RSAT from $link..."
                        Start-BitsTransfer -Source $link -Destination $destinationFile
                        $fileCheck=Get-AuthenticodeSignature $destinationFile
                        if($fileCheck.status -ne "valid") {write-host "$destinationFile is not valid. Please try again...";break}
                        $wusaCommand = $destinationFile + " /quiet"
                        Write-host "Installing RSAT - please wait..."
                        Start-Process -FilePath "C:\Windows\System32\wusa.exe" -ArgumentList $wusaCommand -Wait
                        }
                    return $true
                    }
                catch{
                    write-warning "$($error[0].Exception)"
                    return $false
                    }
            }else{
                Write-host "Prerequisite checks: module ActiveDirectory IS currently available on this system." -ForegroundColor Green
                return $true
                }
        }
        function listAllHyperVNodes($domainObjects){
            try{
                #if(!$domains){$domains=(Get-ADForest).Name|%{(Get-ADForest -Identity $_).Name}}
                $allHyperVNodes=@()
                foreach ($domainObject in $domainObjects){
                    #[string]$dc=(get-addomaincontroller -DomainName "$domain" -Discover -NextClosestSite).HostName
                    $domain=$domainObject.domain
                    $username=$domainObject.username
                    $password=$domainObject.password
                    $encryptedPassword=ConvertTo-securestring $password -AsPlainText -Force
                    $credential=New-Object -TypeName System.Management.Automation.PSCredential -Args $username,$encryptedPassword
                    $dc=if(!($domainObject.dc)){
                        $domainNode=.{try{get-addomain -Server $domain}catch{$false}}
                        if (!$domainNode){return $null}
                        if($domainNode.ReplicaDirectoryServers[1]){
                                $domainNode.ReplicaDirectoryServers[1]
                            }else{
                                $domainNode.PDCEmulator
                            }
                    }else{
                        $domainObject.dc
                    }
                    $session=.{
                        try{
                            new-pssession -computername $dc -credential $credential -EA Stop
                            write-host "Connected to $dc..."
                        }catch{
                            write-warning $_
                            return $false
                        }
                    }
                    if($session){
                        write-host "Collecting all Hyper-V Clusters in $domain. This may take a while, depending on cluster sizes."
                        $allClusters=.{              
                            $clusters=invoke-command -session $session -scriptblock{
                                $rsatClusteringPowershell=get-WindowsFeature RSAT-Clustering-PowerShell
                                if(!$rsatClusteringPowershell.Installed){Add-WindowsFeature RSAT-Clustering-PowerShell}
                                (get-cluster -domain $env:USERDNSDOMAIN).Name
                            }
                            return $clusters
                        }                
                        foreach ($cluster in $allClusters){
                            write-host "Checking $cluster"
                            try{
                                $nodes=invoke-command -computername "$cluster.$domain" -credential $credential -scriptblock{
                                    param($clustername)
                                    #$rsatClusteringPowershell=get-WindowsFeature RSAT-Clustering-PowerShell
                                    #if(!$rsatClusteringPowershell.Installed){Add-WindowsFeature RSAT-Clustering-PowerShell}
                                    $x=Get-ClusterNode -Cluster $clustername -ea SilentlyContinue
                                    if($x){
                                        $x|Where-Object{$_.State -eq 'Up'}|Select-Object Name,@{name='Cluster';e={"$clustername.$env:USERDNSDOMAIN"}}
                                    }else{
                                        $false
                                    }
                                } -Args $cluster -EA Stop|select Name,Cluster
                                if($nodes){$allHyperVNodes+=$nodes} 
                            }catch{
                                write-warning "$cluster is skipped..."
                            }                   
                        }
                        Remove-PSSession $session
                    }else{
                        write-warning "$env:computername cannot connect to $dc..."
                    }
                }
                return $allHyperVNodes
            }catch{
                Write-Error $_
                return $false
                }
        }

        try{
            #$null=includeRSAT;
            #$rsatClusteringPowershell=get-WindowsFeature RSAT-Clustering-PowerShell
            #if(!$rsatClusteringPowershell.Installed){Add-WindowsFeature RSAT-Clustering-PowerShell}                                    
            $hyperVHosts=listAllHyperVNodes $domainObjects
            $hyperVHostNames=$hyperVHosts|sort -property Cluster
            return $hyperVHostNames
        }catch{
            Write-Error $_
            return $false
            }
    }
    function sortArrayStringAsNumbers([string[]]$names){
        $hashTable=@{}
        $maxLength=($names | Measure-Object -Maximum -Property Length).Maximum
        foreach ($name in $names){
            #[int]$x=.{[void]($name -match '(?:.(\d+))+$');$matches[1]}
            #$x=.{[void]($name -match '(?:.(\d+)+)$');@($name.substring(0,$name.length-$matches[1].length),$matches[1])}
            $originalName=$name
            $x=.{
                [void]($name -match '(?:.(\d+)+)\w{0,}$');
                if($matches){
                    [int]$trailingNonDigits=([regex]::match($name,'\D+$').value).length
                    if($trailingNonDigits){
                        $name=$name.substring(0,$name.length-$trailingNonDigits)
                    }
                    return ($name.substring(0,$name.length-$matches[1].length))+$matches[1].PadLeft($maxLength,'0');
                }else{
                    return $name+''.PadLeft($maxLength,'0');
                }}
            $hashTable.Add($originalName,$x)
            }
        $sorted=foreach($item in $hashTable.GetEnumerator() | Sort Value){$item.Name}
        return $sorted
    }

    #write-host "Obtaining cluster names and associated hosts..."
    $hyperVHostsInForest=getHyperVHosts $domainObjects
    $sortedArray=@()
    $clusters=$hyperVHostsInForest|Group-Object -Property Cluster
    foreach($cluster in $clusters){
        $clusterName=$cluster.Name
        write-host $clusterName       
        $sortedHostnames=sortArrayStringAsNumbers $cluster.Group.Name
        $sortedHostnames|%{$sortedArray+=New-Object -TypeName psobject -Property @{hostname=$_; cluster=$clusterName}}
    }
    return $sortedArray
}
function getQuickStats($computername=$env:computername){    
    try{
        # Server Model
        $biosRegKey="REGISTRY::HKEY_LOCAL_MACHINE\Hardware\Description\System\Bios"
        $bios=Get-ItemProperty $biosRegKey
        $model=$bios.SystemProductName

        # RAM (IN GB)
        $ramInfo=Get-CimInstance win32_physicalmemory|Select-Object *
        $ramGb=.{$sum=0;$ramInfo.Capacity|%{ $sum += $_};return $sum/1GB}

        # RAM Module size
        #$ramModuleSize=($ramInfo.Capacity|select -unique)|%{$_/1GB}
        $ramGroups=$ramInfo.Capacity|Group|select Count,@{name='ramSize';e={($_.Group|select -Unique)/1GB}}
        $ramModuleSize=$ramGroups|%{"($($_.Count)) $($_.ramSize)"}

        # Ram Speed (Mhz)
        # $ramSpeed=$ramInfo.Speed|select -unique
        $ramSpeedGroups=$ramInfo.Speed|Group|select Count,@{name='ramSize';e={$_.Group|select -Unique}}
        $ramSpeed=$ramSpeedGroups|%{"($($_.Count)) $($_.ramSize)"}

        # ramConfiguredClockSpeed
        #$ramConfiguredSpeed=$ramInfo.ConfiguredClockSpeed|select -unique
        $ramConfigSpeedGroups=$ramInfo.ConfiguredClockSpeed|Group|select Count,@{name='ramSize';e={$_.Group|select -Unique}}
        $ramConfiguredSpeed=$ramConfigSpeedGroups|%{"($($_.Count)) $($_.ramSize)"}
        
        # RAM in use
        $osObject=get-wmiobject Win32_OperatingSystem
        $os=$osObject.Caption
        $ramPercentFree=$osObject.FreePhysicalMemory/$osObject.TotalVisibleMemorySize
        $ramUsed=[math]::round($ramGb-($ramPercentFree*$ramGb),2)    

        # Slots filled
        $ramSlotsFilled=($ramInfo|?{$_.Capacity}).count

        # Slots available
        $ramSlotsAvailable=$ramInfo.count
        # Cores
        $cpu=Get-CimInstance Win32_processor
        $cores=.{$total=0;$cpu.NumberOfCores|%{$total+=$_};return $total}

        # Proc count
        $sockets=$cpu.count

        # Cpu Type
        $cpuType=$cpu.Name|select -unique

        # Clock speed
        $cpuClock=($cpu.MaxClockSpeed|Measure-Object -Average).Average
        $cpuBandwidth=$cpuClock*$cores
        $cpuUtilizationPercent=($cpu|Measure-Object -property LoadPercentage -Average).Average
        $cpuUtilization=($cpuUtilizationPercent/100)*$cpuBandwidth

        #$cpuUtilizationPercent=((Get-WmiObject -Class Win32_PerfFormattedData_PerfOS_Processor).PercentProcessorTime|measure-object -Average).Average  
        # $cpuUtilizationPercent=.{#$rawValues=wmic cpu get loadpercentage
        #     $rawValues=$cpu.LoadPercentage
        #     $selectNumbers=$rawValues|?{$_ -match '\d+'}
        #     ($selectNumbers|Measure-Object -Average).Average
        # }

        # $cpuUtilizationPercent=(get-counter -Counter "\Processor(*)\% Processor Time" -SampleInterval 1 -MaxSamples 2 |`
        #     select -ExpandProperty countersamples | select -ExpandProperty cookedvalue | Measure-Object -Average).average
        
        # $cpuUtilization=((Get-process|Select-Object CPU).CPU|Measure-Object -sum).Sum
        # $cpuUtilizationPercent=($cpuUtilization/$cpuBandwidth)*100

        # $cpuProperties=@(
        #     @{Name="processName"; Expression = {$_.name}},
        #     @{Name="cpuPercent"; Expression = {$_.PercentProcessorTime}},    
        #     @{Name="MemoryGb"; Expression = {[Math]::Round(($_.workingSetPrivate / 1GB),2)}}
        # )
        # $totalCpuUsage=(Get-WmiObject -class Win32_PerfFormattedData_PerfProc_Process | 
        #     Select-Object $cpuProperties)|?{$_.'processName' -eq '_Total'}

        # Other methods not being used
        #$cs = Get-WmiObject -class Win32_ComputerSystem
        #$Sockets=$cs.numberofprocessors
        #$Cores=$cs.numberoflogicalprocessors

        $guestVms = Get-VM|Where-Object {$_.State -eq 'Running'}
        $vmsCount = $guestVms.count
    }catch{}  
    return [PSCustomObject][ordered]@{
        node=$computername        
        model=$model
        cpuType=$cpuType
        sockets=$sockets
        cores=$cores
        cpuBandwidth=$cpuBandwidth
        cpuUtilization=[math]::round($cpuUtilization,2)
        cpuUtilizationPercent=[math]::round($cpuUtilizationPercent,2)
        ramGb=$ramGb
        ramUsedGb=[math]::round($ramUsed,2)
        ramUsedPercent=[math]::round(($ramUsed/$ramGb)*100,2)
        ramModuleSizeGb=$ramModuleSize
        ramSlotsAvailable=$ramSlotsAvailable
        ramSlotsFilled=$ramSlotsFilled
        ramSpeed=$ramSpeed
        ramConfigSpeed=$ramConfiguredSpeed
        vmsCount=$vmsCount
        os=$os
    }    
}
function checkSystem($computername=$env:computername,$credential){ 
    $session=if($credential){
            New-PSSession $computername -credential $credential
        }else{
            New-PSSession $computername
        }
    if($session.State -eq 'Opened'){        
        $quickStats=invoke-command -session $session -scriptblock{
                param($getQuickStats)
                [scriptblock]::create($getQuickStats).invoke()
            } -Args ${function:getQuickStats}
        Remove-PSSession $session
    }else{
        Write-Warning "$computername is not reachable from $env:computername via WinRM"
        $quickStats=$false        
    }
    if($quickStats){
        return $quickStats|Select-Object -Property * -ExcludeProperty PSComputerName,RunspaceId,PSShowComputerName
    }else{
        return [PSCustomObject]@{
            computerName = $computername+' Unreachable'
        }
    }
}
function generateEmailContent{
    param(
        $arrayObjects,
        $selectFields='node,model,cpuType,sockets,cores,cpuBandwidth,cpuUtilizationPercent,ramGb,ramUsedPercent',
        $reportName,
        $summary,
        $css="
        <style>
        .h1 {
            font-size: 18px;
            height: 40px;
            padding-top: 80px;
            margin: auto;
            text-align: center;
        }
        .h5 {
            font-size: 22px;
            text-align: center;
        }
        .th {text-align: center;}
        .table {
            padding:7px;
            border:#4e95f4 1px solid;
            background-color: white;
            margin-left: auto;
            margin-right: auto;
            width: 100%
            }
        .colgroup {}
        .th { background: #0046c3; color: #fff; padding: 5px 10px; }
        .td { font-size: 11px; padding: 5px 20px; color: #000;
            width: 1px;
            white-space: pre;
            }
        .tr { background: #b8d1f3;}
        .tr:nth-child(even) {
            background: #dae5f4;
            width: 1%;
            white-space: nowrap
        }
        .tr:nth-child(odd) {
            background: #b8d1f3;
            width: 1%;
            white-space: nowrap
        }
        </style>
        "
        )
    $filteredArray=invoke-expression "`$arrayObjects|select-object $selectFields"
    $report=$filteredArray|ConvertTo-Html -Fragment|Out-String
    $reportHtml=$report -replace '\<(?<item>\w+)\>', '<${item} class=''${item}''>'
    $emailContent='<html><head>'+$css+"</head><body><h1>$reportName</h1>`n<h5>$summary</h5>"+$reportHtml+'</body></html>'
    return $emailContent
}

function generateHtmlFragment($arrayObjects,$selectFields,$reportName,$summary){
    $filteredArray=invoke-expression "`$arrayObjects|select-object $selectFields"
    $htmlFragment=$filteredArray|ConvertTo-Html -Fragment|Out-String
    $customHtml=$htmlFragment -replace '\<(?<item>\w+)\>', '<${item} class=''${item}''>'
    $customHtmlFragment="<h1>$reportName</h1>`n<h5>$summary</h5>"+$customHtml
    return $customHtmlFragment    
}

function packageHtmlEmailContent{
    param(
        $htmlBody,
        $footer,
        $css="
        <style>
        .h1 {
            font-size: 18px;
            height: 40px;
            padding-top: 80px;
            margin: auto;
            text-align: center;
        }
        .h5 {
            font-size: 22px;
            text-align: center;
        }
        .th {text-align: center;}
        .table {
            padding:7px;
            border:#4e95f4 1px solid;
            background-color: white;
            margin-left: auto;
            margin-right: auto;
            width: 100%
            }
        .colgroup {}
        .th { background: #0046c3; color: #fff; padding: 5px 10px; }
        .td { font-size: 11px; padding: 5px 20px; color: #000;
            width: 1px;
            white-space: pre;
            }
        .tr { background: #b8d1f3;}
        .tr:nth-child(even) {
            background: #dae5f4;
            width: 1%;
            white-space: nowrap
        }
        .tr:nth-child(odd) {
            background: #b8d1f3;
            width: 1%;
            white-space: nowrap
        }
        </style>
        "
        )
    $emailContent='<html><head>'+$css+"</head><body>$htmlBody"+$footer+'</body></html>'
    return $emailContent         
}
function emailReport{
    param(
        $emailFrom,
        $emailTo,
        $subject,
        $emailContent,
        $smtpRelayServer
    )
    Send-MailMessage -From $emailFrom `
    -To $emailTo `
    -Subject $subject `
    -Body $emailContent `
    -BodyAsHtml `
    -SmtpServer $smtpRelayServer    
}

function updateGoogleSheets{
    param(
        $spreadsheetID,
        $sheetName=$env:USERDNSDOMAIN,
        $reportCsv,
        $certPath,
        $certPassword='notasecret',
        $iss,
        $scope="https://www.googleapis.com/auth/spreadsheets https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/drive.file"
    )
    # Set security protocol to TLS 1.2 to avoid TLS errors
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    Import-Module UMN-Google

    # obtain token if necessary
    $accessToken = Get-GOAuthTokenService -scope $scope -certPath $certPath -certPswd $certPassword -iss $iss
    # Create new sheet, if not already exists
    try{Add-GSheetSheet -accessToken $accessToken -sheetName $sheetName -spreadSheetID $spreadsheetID}catch{}

    # Upload CSV data to Google Sheets with Set-GSheetData
    $inputCsv = Import-Csv $reportCsv
    $header = $inputCsv[0].psobject.properties.name|select -first 26 # Google default 26 columns max
    $import = new-Object System.Collections.ArrayList
    $import.Add($header) | Out-Null
    $inputCsv | ForEach-Object {
    $row=@(foreach($label in $header){$_."$($label)"})
    $import.Add($row) | Out-Null
    }
    $import.Add(@()) | Out-Null
    $import.Add(@( $summary -split '\r?\n'|%{$_.Trim('<br>$')}|out-string )) | Out-Null
    $columnsCount=$header.count
    $columnLetter=if($columnsCount -le 26){
            [char](64+$columnsCount) # column A is char 65, so the Nth column is char(64+N)
        }else{
            'Z'
        }
    $range="A1:$columnLetter$($import.Count)"
    try{
        Set-GSheetData -accessToken $accessToken `
            -rangeA1 $range `
            -sheetName $sheetName `
            -spreadSheetID $spreadsheetID `
            -values $import #-Debug -Verbose            
        return $true
    }catch{
        write-warning $_
        return $false
    }
}

$hyperVClusters=getAllHyperVClusters $domainObjects
$clusterGroups=$hyperVClusters|Group-Object -Property cluster
$emailFragments=''
foreach($group in $clusterGroups){
    $clusterName=$group.Name
    $domain=[regex]::matches($clusterName,'\.(.*)').captures.groups[1].value
    $username=($domainObjects|?{$_.domain.toupper() -eq $domain.toupper()}).username
    $password=($domainObjects|?{$_.domain.toupper() -eq $domain.toupper()}).password
    $encryptedPassword=ConvertTo-securestring $password -AsPlainText -Force
    $credential=New-Object -TypeName System.Management.Automation.PSCredential -Args $username,$encryptedPassword
    write-host $clusterName
    $timeStamp=(get-date).tostring()
    $results=@()
    $hyperVHosts=$group.Group.hostname
    $hostCount=$hyperVHosts.Count
    for ($i=0;$i -lt $hostCount;$i++){
        write-host "Scanning $($i+1) of $hostCount`: $($hyperVHosts[$i])..."
        $results+=checkSystem "$($hyperVHosts[$i]).$domain" $credential
    }
    $totalRam=.{$total=0;$results.ramGb|%{$total+=$_};$total}
    $usedRam=.{$total=0;$results.ramUsedGb|%{$total+=$_};$total}
    $usedRamPercent=[math]::round($usedRam/$totalRam*100,2)
    $totalCpuBandwidth=.{$total=0;$results.cpuBandwidth|%{$total+=$_};$total}
    $usedCpuBandwidth=.{$total=0;$results.cpuUtilization|%{$total+=$_};$total}
    $usedCpuPercent=[math]::round($usedCpuBandwidth/$totalCpuBandwidth*100,2)
    $summary="Data Timestamp: $timeStamp<br>
Hosts Count: $($hyperVHosts.count)<br>
Total CPU Bandwidth: $([math]::round($totalCpuBandwidth/1000)) Ghz<br>
Used CPU Percent: $usedCpuPercent %<br>
Total RAM: $totalRam GB<br>
Used RAM: $usedRam GB<br>
Used RAM Percent: $usedRamPercent %<br>
Available RAM: $($totalRam-$usedRam) GB<br>"
    $reportName="Hyper-V Capacity Report for Cluster $clustername"
    $emailFragment=generateHtmlFragment $results $selectFields $reportName $summary
    $emailFragments+=$emailFragment
    $reportCsvPath=join-path $workingDirectory "$clustername.csv"
    $null=$results|export-csv $reportCsvPath -NoTypeInformation   
    updateGoogleSheets $spreadsheetID $clusterName $reportCsvPath $certPath $certPassword $iss $scope
}

$today=(get-date).DayOfWeek
if ($today -eq $emailDay){
    $footer="<br>GoogleSheets URL: $spreadsheetUrl<br>"
    $emailContent=packageHtmlEmailContent $emailFragments $footer
    emailReport $emailFrom $emailTo $subject $emailContent $smtpRelayServer
}

PowerShell: Update CSV File Using Active Directory

# adAccountsCsvUpdate.ps1

$originalCsv='C:\Users\rambo\Desktop\kimconnectUsers.csv'
$newCsv='C:\Users\rambo\Desktop\kimconnectUsers-processed.csv'
$newEmailSuffix='@kimconnect.com'
$newOu='OU=Test,DC=kimconnect,DC=com'

function adAccountsCsvUpdate{
  param(
    $originalCsv,
    $newCsv,
    $newEmailSuffix,
    $newOu
  )

  function generateRandomPassword{
    param(
        $minLength = 10,
        $maxLength = 16,
        $nonAlphaChars = 2,
        $excludeRegex='[:\$\%\&\,]',
        $replaceExclusionWith=@(';','!','/','{','^','+','-','*','_')
    )
    add-type -AssemblyName System.Web
    $randomLength = Get-Random -Minimum $minLength -Maximum $maxLength   
    $randomPassword = [System.Web.Security.Membership]::GeneratePassword($randomLength, $nonAlphaChars)
    $sanitizedPassword = $randomPassword -replace $excludeRegex,"$(Get-Random -InputObject $replaceExclusionWith)"
    $fixedRepeating = .{$rebuiltString=''
                        for ($i=0;$i -lt $sanitizedPassword.length;$i++){
                        $previousChar=$sanitizedPassword[$i-1]
                        $thisChar=$sanitizedPassword[$i]
                        $nextChar=$sanitizedPassword[$i+1]
                        if($thisChar -eq $nextChar){
                            do{
                                $regenChar=[char](Get-Random (65..122) )
                                }until($regenChar -ne $previousChar -and $regenChar -ne $nextChar)
                            $rebuiltString+=$regenChar
                            }
                        else{$rebuiltString+=$thisChar}
                        }
                        return $rebuiltString
                        }
                             
    return $fixedRepeating
  }

  $csvContents=import-csv $originalCsv
  write-host "Pulling existing records from Active Directory of $env:USERDNSDOMAIN..."
  $allExistingUsers=get-aduser -Filter * -property SamAccountName,GivenName,sn,EmailAddress,Department,Description,telephoneNumber,Title,Manager,ManagedBy,City,State,postalCode,Enabled

  write-host "First pass: newSamAccountName"
  $firstPass=@()
  $count=$csvContents.count
  $itemIndex=0
  foreach ($row in $csvContents){
    $samAccountName=$row.SamAccountName
    $firstName=$row.GivenName
    $lastName=$row.Surname
    #$userPrincipalName=$row.UserPrincipalName
    $itemIndex++
    write-host "Processing $itemIndex of $count`: $samAccountName..."
    $newSamAccountName=.{
      # Default: return NULL if account already exists
      # $matchedEmail=$allExistingUsers|?{$_.EmailAddress -eq $userPrincipalName}
      # if($matchedEmail){
      #   return $null
      # }

      # Default: keep the original username when no duplicate record exists
      $matchedSam=$allExistingUsers|?{$_.SamAccountName -eq $samAccountName}
      if(!$matchedSam){
        return $samAccountName
      }
      # Method 1: testing firstname initials + lastname combinations
      for ($i=0;$i -lt $firstName.length;$i++){
        $testUsername=($firstName[0..$i] -join '')+$lastName
        if($testUserName -notin $allExistingUsers.SamAccountName){
          return $testUsername
        }
      }
      # Method 2: incrementing the username by a single digit
      for($i=1;$i -lt 11;$i++){
        $testUsername2=$samAccountName+$i
        if($testUserName2 -notin $allExistingUsers.SamAccountName){
          return $testUsername2
        }
      }
    }
    if($newSamAccountName -ne $samAccountName){
      write-host "SAM in CSV $samAccountName shall be updated as $newSamAccountName"
    }  
    $firstPass+=$row|select-object *,@{Name='newSamAccountName';Expression={$newSamAccountName}}
  }

  write-host "Second pass: newManagerSamAccount"
  $secondPass=@()
  foreach ($row in $firstPass){
    $manager=$row.Manager
    $firstName=[regex]::match($manager,'^(.+)\s(.+)').groups[1].Value
    $lastName=[regex]::match($manager,'^(.+)\s(.+)').groups[2].Value
    $matchedManagerSam=$firstPass|?{$_.GivenName -eq $firstName -and $_.Surname -eq $lastName}
    $newManagerSamAccount=.{      
      if($matchedManagerSam){
        return $matchedManagerSam.newSamAccountName
      }else{
        return $null
      }
    }
    if($newManagerSamAccount -ne $manager){
      write-host "Manager in CSV '$manager' shall be updated as '$newManagerSamAccount'"
    }
    $secondPass+=$row|select-object *,@{Name='newManagerSamAccount';Expression={$newManagerSamAccount}}
  }

  write-host "Third pass: adding new manager Distinguished Name paths..."
  $thirdPass=@()
  foreach ($row in $secondPass){
    $thisNewManagerSamAccount=$row.newManagerSamAccount   
    $newManagerDN=if($thisNewManagerSamAccount){
      $matchedRow=$secondPass|?{$_.newSamAccountName -eq $thisNewManagerSamAccount}
      $surName=$matchedRow.Surname
      $givenName=$matchedRow.GivenName
      "CN=$surName\, $givenName,"+$newOu
    }else{''}
    $thirdPass+=$row|select-object *,@{Name='newManagerDN';Expression={$newManagerDN}}
  }

  
  write-host "Fourth pass: adding new email addresses..."
  $forthPass=@()
  foreach ($row in $thirdPass){
    $username=$row.newSamAccountName
    $forthPass+=$row|select-object *,@{Name='newEmailAddress';Expression={$username+$newEmailSuffix}}
  }  

  write-host "Fifth pass: generating new randomized passwords"
  $fifthPass=@()
  foreach ($row in $forthPass){
    $fifthPass+=$row|select-object *,@{Name='newPassword';Expression={[string](generateRandomPassword)}}
  }

  $newCsvContents=$fifthPass
  $conflictingUserNames=$newCsvContents|?{$_.SamAccountName -ne $_.newSamAccountName}
  write-host "There are $($conflictingUsernames.count) usernames that conflict with existing accounts in Active Directory. Hence, those new account usernames have been modified to mitigate collisions."
  if(test-path $newCsv){remove-item $newCsv -force}
  if(!(test-path $(split-path $newCsv -parent))){mkdir $(split-path $newCsv -parent) -force}
  $oldHeaders='"'+$($csvContents[0].psobject.Properties.Name -join '","')+'"'
  $newHeaders=$oldHeaders+',"newSamAccountName","newManagerSamAccount","newManagerDN","newEmailAddress","newPassword"'
  Add-Content -Path $newCsv -Value $newHeaders
  $newCsvContents|Export-Csv $newCsv -NoTypeInformation -append
}

adAccountsCsvUpdate $originalCsv $newCsv $newEmailSuffix $newOu
$originalCsvFile='C:\temp\ActiveDirectoryUsers.csv'
$newCsvFile='C:\temp\ActiveDirectoryUsers_Updated.csv'

function updateRecordsUsingActiveDirectory($originalCsv,$newCsv){
  $csvContents=import-csv $originalCsv
  write-host "Pulling existing records from Active Directory of $env:USERDNSDOMAIN..."
  $allExistingUsers=get-aduser -Filter * -property SamAccountName,GivenName,sn,EmailAddress,Department,Description,telephoneNumber,Title,Manager,ManagedBy,City,State,postalCode,Enabled

  write-host "First pass: newSamAccountName"
  $firstPass=@()
  $count=$csvContents.count
  $itemIndex=0
  foreach ($row in $csvContents){
    $samAccountName=$row.SamAccountName
    $firstName=$row.GivenName
    $lastName=$row.sn
    $itemIndex++
    write-host "Processing $itemIndex of $count`: $samAccountName..."  
    $newSamAccountName=.{
      # Default: keep the original username when there is no duplicate record
      $matchedSam=$allExistingUsers|?{$_.SamAccountName -eq $samAccountName}
      if(!$matchedSam){
        return $samAccountName
      }
      # Method 1: testing firstname initials + lastname combinations
      for ($i=0;$i -lt $firstName.length;$i++){
        $testUsername=($firstName[0..$i] -join '')+$lastName
        if($testUserName -notin $allExistingUsers.SamAccountName){
          return $testUsername
        }
      }
      # Method 2: incrementing the username by a single digit
      for($i=1;$i -lt 11;$i++){
        $testUsername2=$samAccountName+$i
        if($testUserName2 -notin $allExistingUsers.SamAccountName){
          return $testUsername2
        }      
      }
    }
    if($newSamAccountName -ne $samAccountName){
      write-host "SAM in CSV $samAccountName shall be updated as $newSamAccountName"
    }  
    $firstPass+=$row|select-object *,@{Name='newSamAccountName';Expression={$newSamAccountName}}
  }
  
  write-host "Second pass: newManagerSamAccount & newManagerDN"
  $secondPass=@()
  foreach ($row in $firstPass){
    $manager=.{if($row.Manager -notmatch '\s'){
        return $row.Manager
      }else{
        $managerArray=$row.Manager -split ' '
        $managerLastName=$managerArray[$managerArray.count-1]
        return "$(($row.Manager)[0])$managerLastName"
      }
    }
    $matchedManagerSam=$firstPass|?{$_.SamAccountName -eq $manager}
    $newManagerSamAccount=.{      
      if($matchedManagerSam){
        return $matchedManagerSam.newSamAccountName
      }else{
        return $null
      }
    }
    if($newManagerSamAccount -ne $manager){
      write-host "Manager in CSV '$manager' shall be updated as '$newManagerSamAccount'"
    }
    $newManagerDN=.{
      if($matchedManagerSam.OU){
        return "CN=$($matchedManagerSam.sn)\, $($matchedManagerSam.GivenName),"+$matchedManagerSam.OU
      }else{
        return "CN=$($row.sn)\, $($row.GivenName),"+$row.OU
      }
    }
    $secondPass+=$row|select-object *,@{Name='newManagerSamAccount';Expression={$newManagerSamAccount}},@{Name='newManagerDN';Expression={$newManagerDN}}
  }
  
  write-host "Third pass: generating new randomized passwords"
  $thirdPass=@()
  function generateRandomPassword{
    param(
        $minLength = 10,
        $maxLength = 16,
        $nonAlphaChars = 2,
        $excludeRegex='[:\$\%\&\,]',
        $replaceExclusionWith=@(',',';','!','/','{','^','+','-','*','_')
    )
    add-type -AssemblyName System.Web
    $randomLength = Get-Random -Minimum $minLength -Maximum $maxLength   
    $randomPassword = [System.Web.Security.Membership]::GeneratePassword($randomLength, $nonAlphaChars)
    $sanitizedPassword = $randomPassword -replace $excludeRegex,"$(Get-Random -InputObject $replaceExclusionWith)"
    $fixedRepeating = .{$rebuiltString=''
                        for ($i=0;$i -lt $sanitizedPassword.length;$i++){
                        $previousChar=$sanitizedPassword[$i-1]
                        $thisChar=$sanitizedPassword[$i]
                        $nextChar=$sanitizedPassword[$i+1]
                        if($thisChar -eq $nextChar){
                            do{
                                $regenChar=[char](Get-Random (65..122) )
                                }until($regenChar -ne $previousChar -and $regenChar -ne $nextChar)
                            $rebuiltString+=$regenChar
                            }
                        else{$rebuiltString+=$thisChar}
                        }
                        return $rebuiltString
                        }
                             
    return $fixedRepeating
  }

  foreach ($row in $secondPass){
    $thirdPass+=$row|select-object *,@{Name='newPassword';Expression={[string](generateRandomPassword)}}
  }

  $newCsvContents=$thirdPass
  $conflictingUserNames=$newCsvContents|?{$_.SamAccountName -ne $_.newSamAccountName}
  write-host "There are $($conflictingUsernames.count) usernames that conflict with existing accounts in Active Directory. Hence, those new account usernames have been modified to mitigate collisions."
  if(test-path $newCsv){remove-item $newCsv -force}
  if(!(test-path $(split-path $newCsv -parent))){mkdir $(split-path $newCsv -parent) -force}
  $oldHeaders='"'+$($csvContents[0].psobject.Properties.Name -join '","')+'"'
  $newHeaders=$oldHeaders+',"newSamAccountName","newManagerSamAccount","newManagerDN","newPassword"'
  Add-Content -Path $newCsv -Value $newHeaders
  $newCsvContents|Export-Csv $newCsv -NoTypeInformation -append
}

updateRecordsUsingActiveDirectory $originalCsvFile $newCsvFile

Skillset Required as a 2021 Systems Administrator

Long gone are the days when a SysAdmin only needed to babysit server hardware and install Windows machines. To survive in the landscape of 2020 and beyond, a techie would need the following skills:

  • LAMP Stack: Linux, Apache, MySQL/PostgreSQL, and PHP
  • Coding: PHP, HTML, JavaScript, CSS, jQuery, Vue, Bash, SQL, Python, and PowerShell
  • CMS: Drupal, Joomla, WordPress, etc.
  • Database: PostgreSQL, MySQL
  • Web server: Apache and NGINX
  • Data collection and ingestion: .NET
  • Operating Systems: Linux and Windows
  • Cloud: AWS, Azure, Google Cloud
  • Container Orchestration: Kubernetes (with Docker)
  • IT Automation: Jenkins, Nagios, Ansible, Git (Gitlab)
  • Disciplines: agile development
  • Communication: high verbal and written linguistic skills

MBR & GPT Disk Partitioning Comparisons

A quote one has provided to colleagues on this topic:

Although speed is the same between MBR and GPT disk partitioning, the latter supports volumes up to 18 exabytes with 128 partitions per disk, while the former allows volumes up to 2 terabytes with 4 primary partitions per disk. On an MBR disk, the partitioning and boot data are stored contiguously. If this data is overwritten or corrupted, it’s difficult to recover. Conversely, GPT stores multiple copies of this data across the disk; thus, it’s easier to recover from corruption on a GPT-formatted device.
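The 2-terabyte MBR ceiling quoted above follows directly from MBR's 32-bit sector addressing combined with classic 512-byte sectors; a quick arithmetic check (Python used purely for illustration):

```python
# MBR records sector addresses as 32-bit LBA values; with 512-byte
# sectors, the largest addressable volume is 2^32 * 512 bytes = 2 TiB.
SECTOR_BYTES = 512
LBA_BITS = 32

max_mbr_volume = (2 ** LBA_BITS) * SECTOR_BYTES
print(max_mbr_volume / 2 ** 40)  # 2.0 (TiB)
```

GPT, by contrast, uses 64-bit LBA values, which is where its multi-exabyte ceiling comes from.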

How to Become a DevOps Engineer in 2020

This note was scribbled in about five minutes; thus, it may lack polish and pertinent details. Eh…

One would need these skills:

  1. Platforms: AWS, Azure, Google Cloud
  2. Operating Systems: Linux (Debian, Ubuntu, Redhat), Windows
  3. Container Orchestration: Docker Swarm, Kubernetes
  4. Automation: Terraform (infrastructure), Ansible, Chef, Puppet (configuration management)
  5. Build Automation: Gitlab, Github Actions, TeamCity, Jenkins
  6. Monitoring: Nagios, Prometheus
  7. Database: PostgreSQL, MySQL, MongoDB
  8. Coding: Java, PHP, Tomcat, JavaScript, Node.js, Bash (Linux), Powershell (Windows), Python (OS independent), Yaml
  9. Web Engines: Nginx, Apache, Haproxy
  10. Soft skills: communication, judgement, focus, and teamwork

Java Virtual Machine Optimal Memory Tuning

Overview:

There are five available garbage collectors (GC) for Java Virtual Machines (JVM). Here are some quick lessons learned on each GC engine:

  1. G1: is the default for Java 9 and newer. It’s the best choice for real-time applications with rapid vertical scaling. Some tests show that this is slower than Parallel GC, although Parallel is known to oversubscribe allotted RAM limits leading to application slowness.
  2. Parallel: is the default for Java 8 and older. This engine does everything at once, which can result in random lags. It’s intended for applications where throughput is the focus, not real-time usage.
  3. ConcMarkSweep (CMS): is designed to eliminate the long pauses associated with the full GC of the Parallel and Serial collectors. It’s similar to G1 in using multiple background threads to scan and clear heaps.
  4. Serial: good for single virtual CPU machines. Nobody uses this GC nowadays.
  5. Shenandoah: is available in JDK 12, a super low-latency GC that operates mostly concurrently with the application. It’s most appropriate for gambling, finance, and other latency-sensitive interactive apps. This engine costs more CPU than Parallel GC.

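For quick reference, each collector above is selected with a JVM command-line flag (flag names per OpenJDK documentation; Shenandoah shipped as experimental in JDK 12, so early builds also require -XX:+UnlockExperimentalVMOptions):

```text
-XX:+UseG1GC                 # G1 (default since Java 9)
-XX:+UseParallelGC           # Parallel (default in Java 8)
-XX:+UseConcMarkSweepGC      # ConcMarkSweep/CMS (deprecated in Java 9)
-XX:+UseSerialGC             # Serial
-XX:+UseShenandoahGC         # Shenandoah (JDK 12+)
```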
Practical Examples:

Similar to the process of SQL Server performance tuning, Java memory allocation should be manually configured according to the host’s RAM availability to ensure machine uptime with consistent performance. The bottom line: allocate 4 GB to the JVM for servers with 12 GB of RAM or less, or 80% of available memory for servers with more than 12 GB. The most important points are explicitly setting the garbage collector to G1, specifying the minimum and maximum heap, adding a high reserve percentage, etc.
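The sizing rule of thumb above reduces to a one-line calculation; a minimal sketch (Python for illustration only), whose 16 GB result reproduces the 13107m figure used in the sample arguments further down:

```python
# Rule-of-thumb JVM heap sizing: 4 GB for hosts with 12 GB of RAM
# or less, otherwise 80% of available memory, expressed in MB.
def jvm_heap_mb(host_ram_gb):
    if host_ram_gb <= 12:
        return 4096
    return round(host_ram_gb * 1024 * 0.8)

print(jvm_heap_mb(16))  # 13107 -> i.e. -Xms13107m -Xmx13107m
```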

12GB of RAM:

java.args=-server -Xms8G -Xmx8G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 nogui ... OTHER ARGS ...

16GB of RAM:

java.args=-server -Xms13107m -Xmx13107m -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=40 -XX:G1MaxNewSizePercent=50 -XX:G1HeapRegionSize=16M -XX:G1ReservePercent=15 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=20 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 nogui ... OTHER ARGS ...

12GB of RAM with ColdFusion, IIS, Windows Server 2016:

java.args=-server -Xms8G -Xmx8G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/sun.util.cldr=ALL-UNNAMED --add-opens=java.base/sun.util.locale.provider=ALL-UNNAMED -Xbatch -Djdk.attach.allowAttachSelf=true -Dcoldfusion.home={application.home} -Duser.language=en -Dcoldfusion.rootDir={application.home} -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.JavaUtilLog -Djava.util.logging.config.file={application.home}/lib/logging.properties -Djava.locale.providers=COMPAT,SPI -Dsun.font.layoutengine=icu -Dcoldfusion.classPath={application.home}/lib/updates,{application.home}/lib,{application.home}/lib/axis2,{application.home}/gateway/lib/,{application.home}/wwwroot/WEB-INF/cfform/jars,{application.home}/wwwroot/WEB-INF/flex/jars,{application.home}/lib/oosdk/lib,{application.home}/lib/oosdk/classes

Explanations:

-Xms: sets the starting global memory heap size to prevent pauses caused by heap expansion
-Xmx: places upper boundary on the global heap size to increase the predictability of garbage collection
-XX:+UseG1GC: use the Garbage First (G1) Collector, instead of relying on Explicit GC. The Garbage-First (G1) collector is a server-style garbage collector, targeted for multi-processor machines with large memories. It meets garbage collection (GC) pause time goals with a high probability, while achieving high throughput. The G1 garbage collector is fully supported in Oracle JDK 7 update 4 and later releases. (Source: https://www.oracle.com/technetwork/tutorials/tutorials-1876574.html)
-XX:MaxGCPauseMillis: sets a target for the maximum GC pause time that the engine uses as a baseline. The G1 default of 200 ms is adequate for most systems; setting this value lower causes GC to run more aggressively and less efficiently, which can steal cycles without yielding considerable benefit
-XX:+ParallelRefProcEnabled: multi-threaded reference processing, reducing young and old GC times
-XX:+UnlockExperimentalVMOptions: required to activate experimental parameters
-XX:+AlwaysPreTouch: the JVM touches every single byte of the max heap size with a '0', resulting in the memory being allocated in physical memory in addition to being reserved in the internal data structure (virtual memory). Pretouching is single-threaded, so it is expected behavior that it delays JVM startup. The trade-off is that it reduces page-access time later, as the pages will already be loaded into memory. (Source: https://access.redhat.com/solutions/2685771)
-XX:+DisableExplicitGC: causes the JVM to ignore explicit System.gc() calls from application code, leaving collection scheduling to the configured collector (G1 here)
-XX:ParallelGCThreads: controls the parallelism of global GC phases, which should include parallel reference processing
-XX:ConcGCThreads: sets the number of threads used for concurrent GC phases
-XX:InitiatingHeapOccupancyPercent: percentage of the global heap size as trigger to start a concurrent GC cycle. Please note that a value of 0 denotes 'constant GC cycles', and the default value is 45
-XX:G1NewSizePercent: Sets the percentage of the heap to use as the minimum for the young generation size. The default value is 5 percent
-XX:G1MaxNewSizePercent: percentage of the heap size to use as the maximum for young generation size. The default value is 60 percent
-XX:G1HeapRegionSize: reduce fragmentation of old generation by setting this value higher
-XX:G1ReservePercent: increases the amount of reserve memory kept free to reduce the risk of to-space overflow
-XX:G1HeapWastePercent: sets the percentage of heap that you are willing to waste (left unreclaimed)
-XX:G1MixedGCCountTarget: sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data
-XX:G1MixedGCLiveThresholdPercent: sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle (source: https://www.oracle.com/technical-resources/articles/java/g1gc.html)
-XX:G1RSetUpdatingPauseTimePercent: sets the percentage of the allowed maximum pause time (MaxGCPauseMillis) to spend updating remembered sets
-XX:SurvivorRatio: controls the size of the survivor spaces. During collection of the 'young' spaces, every live object is copied, possibly into one of the survivor spaces. For each object copied, the GC algorithm increments its age (the number of collections survived). If the age is above the current tenuring threshold, the object is copied to the 'old' space. An object can also be copied to the old space directly if the survivor space fills up - this is called an 'overflow.' If the value is set too low, collection copying will overflow into the old generation; if it is too high, some spaces will sit empty. A value of 32 (as used above) is known to keep the spaces about half-filled.
-XX:+PerfDisableSharedMem: feature to reduce worst-case pause latencies
-XX:MaxTenuringThreshold: specifies for how many minor GC cycles an object will stay in the survivor spaces until it finally gets tenured into the old space

 

A Case for Graylog 4

Overview:

A practical real-world application to aggregate logs would be Graylog. Its current incarnation is version 4, which retains the ‘free’ nature of open source at its core while adding a pay model for ‘enterprise’ features. Here’s a quick list of the considerations:

– It’s free and open-source
– The open-source agent, winlogbeat, is able to ship to other vendors should we switch logging applications (e.g., Logstash from Elastic’s ELK stack). “That is currently the best-known way to ingest windows event logs into Graylog.” (source: https://docs.graylog.org/en/4.0/pages/sending/windows.html)
– It’s focused on logging analytics; thus, it would be simple to use and administer
– The alternative of installing SCOM 2019 requires more licensing and resources than a single server instance
– The open source version appears to have all the core functionality to serve the purpose of aggregating Windows logs (source: https://www.graylog.org/products/open-source-vs-enterprise).
– In the event that we need ‘correlation engine,’ ‘scheduled reports,’ and ‘search parameters’ features, the Enterprise upgrade is free as long as the engine ingests less than 5GB/day. This can mean 256MB of logging volume per server for 20 servers. winlogbeat can be configured to filter most noise to achieve this target utilization. (sources: https://community.graylog.org/t/implement-graylog-in-windows-enviroment/12801 and https://www.elastic.co/guide/en/beats/winlogbeat/6.8/configuration-winlogbeat-options.html)

Open-source logging limit calculation:

Since the 5 GB daily ingestion limit is stated by the vendor for the free open-source variant of Graylog, we’re looking at simplified math to derive some estimates as illustrated below:

Storage limit for Graylog non-enterprise version: 5 GB (5,120 MiB) per day
Servers to be monitored: 20 Windows nodes
Average logging budget per server: 256 MiB per day
Average log size per event: 650 bytes
Aggregate event count per server: ≈413,000 events per day
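The estimates above reduce to simple division; a quick sketch of the arithmetic (Python for illustration, using the figures stated above):

```python
# Spread the free-tier Graylog ingestion budget across 20 Windows nodes.
DAILY_LIMIT_MIB = 5 * 1024   # 5 GiB/day vendor limit, in MiB
SERVERS = 20
AVG_EVENT_BYTES = 650        # average log size per event

per_server_mib = DAILY_LIMIT_MIB / SERVERS
events_per_server = per_server_mib * 1024 * 1024 / AVG_EVENT_BYTES
print(per_server_mib)            # 256.0 MiB/day per server
print(round(events_per_server))  # 412978 events/day, i.e. ~413K
```

Tightening the winlogbeat event filters lowers the average bytes per event and raises the event budget accordingly.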