How To Upgrade NextCloud 22.1.1 to 22.2.0 When Deployed with Kubernetes & Helm

Step 1:

Navigate to nextcloud > html on the persistent volume and edit version.php (a kubectl-based approach is sketched after the file listing below):

<?php 
$OC_Version = array(22,1,1,2);
$OC_VersionString = '22.1.1';
$OC_Edition = '';
$OC_Channel = 'stable';
$OC_VersionCanBeUpgradedFrom = array (
  'nextcloud' => 
  array (
    '21.0' => true,
    '22.0' => true,
    '22.1' => true,
    '22.2' => true,   # Add this line 
  ),
  'owncloud' => 
  array (
    '10.5' => true,
  ),
);
$OC_Build = '2021-08-26T13:27:46+00:00 1eea64f2c3eb0e110391c24830cea5f8d9c3e6a1';
$vendor = 'nextcloud';
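
If shell access to the persistent volume isn’t convenient, the same edit can be made from inside the running pod. This is a minimal sketch: the deployment and container names (‘nextcloud’) match the Helm release used in this post, and the web-root path assumes the official NextCloud image layout.

# Open a shell in the app container (deployment/container names assumed from the Helm release in Step 2)
kubectl exec -it deploy/nextcloud -c nextcloud -- bash
# Inside the container, the official image keeps the web root at /var/www/html
sed -i "s/'22.1' => true,/'22.1' => true,\n    '22.2' => true,/" /var/www/html/version.php
grep -A 10 'OC_VersionCanBeUpgradedFrom' /var/www/html/version.php   # confirm the new entry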

Step 2: Run the ‘helm upgrade…’ command with the desired NextCloud version

# Example:
helm upgrade nextcloud nextcloud/nextcloud \
  --set image.tag=22.2.0-fpm \
  --set nginx.enabled=true \
  --set nextcloud.host=dragoncoin.com \
  --set nextcloud.username=dragon,nextcloud.password=SOMEVERYCOMPLEXANDVERYVERYLONGPASSWORD \
  --set internalDatabase.enabled=false \
  --set externalDatabase.existingSecret.enabled=true \
  --set externalDatabase.type=postgresql \
  --set externalDatabase.host='nextcloud-db-postgresql.default.svc.cluster.local' \
  --set persistence.enabled=true \
  --set persistence.existingClaim=nextcloud-claim \
  --set persistence.size=100Ti \
  --set livenessProbe.enabled=false \
  --set readinessProbe.enabled=false \
  --set nextcloud.phpConfigs.upload_max_size=40G \
  --set nextcloud.phpConfigs.upload_max_filesize=40G \
  --set nextcloud.phpConfigs.post_max_size=40G \
  --set nextcloud.phpConfigs.memory_limit=80G
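
Helm triggers a rolling update of the NextCloud deployment. The commands below are a quick way to watch it progress; the deployment name ‘nextcloud’ is assumed to match the Helm release above.

# Watch the new pod replace the old one
kubectl rollout status deployment/nextcloud
kubectl get pods -w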

Step 3: Check the logs and wait for the upgrade process to complete

The previous pod is terminated to make way for the new pod:

admin@controller:~$ k get pod
NAME                                              READY   STATUS        RESTARTS   AGE
nextcloud-67855fc94c-lc2xr                        0/2     Terminating   0          74m
nextcloud-db-postgresql-0                         1/1     Running       0          91m
admin@controller:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
nextcloud-79b5b775fd-2s4bj                        2/2     Running   0          56s
nextcloud-db-postgresql-0                         1/1     Running   0          92m

Expected 502 errors during pod upgrades

admin@controller:~$ k logs nextcloud-79b5b775fd-2s4bj nextcloud-nginx
2021/11/01 05:36:49 [error] 32#32: *24 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.0.95, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com"
10.10.0.95 - dragon [01/Nov/2021:05:36:49 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Linux) mirall/3.2.2git (build 5903) (Nextcloud, linuxmint-5.4.0-89-generic ClientArchitecture: x86_64 OsArchitecture: x86_64)" "192.168.0.164"

Logs showing that the upgrade has progressed… and eventually completed:

admin@controller:~$ kubectl logs nextcloud-79b5b775fd-2s4bj nextcloud

Initializing nextcloud 22.2.0.2 ...
Upgrading nextcloud from 22.1.1.2 ...
Initializing finished
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Setting log level to debug
Turned on maintenance mode
Updating database schema
Updated database
Updating <lookup_server_connector> ...
Updated <lookup_server_connector> to 1.10.0
Updating <oauth2> ...
Updated <oauth2> to 1.10.0
Updating <files> ...
Updated <files> to 1.17.0
Updating <cloud_federation_api> ...
Updated <cloud_federation_api> to 1.5.0
Updating <dav> ...
Fix broken values of calendar objects

 Starting ...

Updated <dav> to 1.19.0
Updating <files_sharing> ...
Updated <files_sharing> to 1.14.0
Updating <files_trashbin> ...
Updated <files_trashbin> to 1.12.0
Updating <files_versions> ...
Updated <files_versions> to 1.15.0
Updating <sharebymail> ...
Updated <sharebymail> to 1.12.0
Updating <workflowengine> ...
Updated <workflowengine> to 2.4.0
Updating <systemtags> ...
Updated <systemtags> to 1.12.0
Updating <theming> ...
Updated <theming> to 1.13.0
Updating <accessibility> ...
Migrate old user config

    0/0 [>---------------------------]   0% Starting ...
    0/0 [->--------------------------]   0%
 Starting ...

Updated <accessibility> to 1.8.0
Updating <contactsinteraction> ...
Updated <contactsinteraction> to 1.3.0
Updating <federatedfilesharing> ...
Updated <federatedfilesharing> to 1.12.0
Updating <provisioning_api> ...
Updated <provisioning_api> to 1.12.0
Updating <settings> ...
Updated <settings> to 1.4.0
Updating <twofactor_backupcodes> ...
Updated <twofactor_backupcodes> to 1.11.0
Updating <updatenotification> ...
Updated <updatenotification> to 1.12.0
Updating <user_status> ...
Updated <user_status> to 1.2.0
Updating <weather_status> ...
Updated <weather_status> to 1.2.0
Checking for update of app accessibility in appstore
Checked for update of app "accessibility" in App Store
Checking for update of app activity in appstore
Checked for update of app "activity" in App Store
Checking for update of app audioplayer in appstore
Checked for update of app "audioplayer" in App Store
Checking for update of app breezedark in appstore
Checked for update of app "breezedark" in App Store
Checking for update of app bruteforcesettings in appstore
Checked for update of app "bruteforcesettings" in App Store
Checking for update of app camerarawpreviews in appstore
Checked for update of app "camerarawpreviews" in App Store
Checking for update of app cloud_federation_api in appstore
Checked for update of app "cloud_federation_api" in App Store
Checking for update of app cms_pico in appstore
Checked for update of app "cms_pico" in App Store
Checking for update of app contactsinteraction in appstore
Checked for update of app "contactsinteraction" in App Store
Checking for update of app dav in appstore
Checked for update of app "dav" in App Store
Checking for update of app documentserver_community in appstore
Checked for update of app "documentserver_community" in App Store
Checking for update of app drawio in appstore
Checked for update of app "drawio" in App Store
Checking for update of app external in appstore
Checked for update of app "external" in App Store
Checking for update of app federatedfilesharing in appstore
Checked for update of app "federatedfilesharing" in App Store
Checking for update of app files in appstore
Checked for update of app "files" in App Store
Checking for update of app files_antivirus in appstore
Checked for update of app "files_antivirus" in App Store
Checking for update of app files_markdown in appstore
Checked for update of app "files_markdown" in App Store
Checking for update of app files_mindmap in appstore
Checked for update of app "files_mindmap" in App Store
Checking for update of app files_pdfviewer in appstore
Checked for update of app "files_pdfviewer" in App Store
Checking for update of app files_rightclick in appstore
Checked for update of app "files_rightclick" in App Store
Checking for update of app files_sharing in appstore
Checked for update of app "files_sharing" in App Store
Checking for update of app files_trashbin in appstore
Checked for update of app "files_trashbin" in App Store
Checking for update of app files_versions in appstore
Checked for update of app "files_versions" in App Store
Checking for update of app files_videoplayer in appstore
Checked for update of app "files_videoplayer" in App Store
Checking for update of app forms in appstore
Checked for update of app "forms" in App Store
Checking for update of app logreader in appstore
Checked for update of app "logreader" in App Store
Checking for update of app lookup_server_connector in appstore
Checked for update of app "lookup_server_connector" in App Store
Checking for update of app maps in appstore
Checked for update of app "maps" in App Store
Checking for update of app music in appstore
Checked for update of app "music" in App Store
Checking for update of app news in appstore
Checked for update of app "news" in App Store
Checking for update of app notifications in appstore
Checked for update of app "notifications" in App Store
Checking for update of app oauth2 in appstore
Checked for update of app "oauth2" in App Store
Checking for update of app password_policy in appstore
Checked for update of app "password_policy" in App Store
Checking for update of app photos in appstore
Checked for update of app "photos" in App Store
Checking for update of app privacy in appstore
Checked for update of app "privacy" in App Store
Checking for update of app provisioning_api in appstore
Checked for update of app "provisioning_api" in App Store
Checking for update of app quicknotes in appstore
Checked for update of app "quicknotes" in App Store
Checking for update of app recommendations in appstore
Checked for update of app "recommendations" in App Store
Checking for update of app registration in appstore
Checked for update of app "registration" in App Store
Checking for update of app richdocuments in appstore
Checked for update of app "richdocuments" in App Store
Checking for update of app serverinfo in appstore
Checked for update of app "serverinfo" in App Store
Checking for update of app settings in appstore
Checked for update of app "settings" in App Store
Checking for update of app sharebymail in appstore
Checked for update of app "sharebymail" in App Store
Checking for update of app spreed in appstore
Checked for update of app "spreed" in App Store
Checking for update of app support in appstore
Checked for update of app "support" in App Store
Checking for update of app survey_client in appstore
Checked for update of app "survey_client" in App Store
Checking for update of app systemtags in appstore
Checked for update of app "systemtags" in App Store
Checking for update of app tasks in appstore
Checked for update of app "tasks" in App Store
Checking for update of app text in appstore
Checked for update of app "text" in App Store
Checking for update of app theming in appstore
Checked for update of app "theming" in App Store
Checking for update of app twofactor_backupcodes in appstore
Checked for update of app "twofactor_backupcodes" in App Store
Checking for update of app updatenotification in appstore
Checked for update of app "updatenotification" in App Store
Checking for update of app user_status in appstore
Checked for update of app "user_status" in App Store
Checking for update of app video_converter in appstore
Checked for update of app "video_converter" in App Store
Checking for update of app viewer in appstore
Checked for update of app "viewer" in App Store
Checking for update of app weather_status in appstore
Checked for update of app "weather_status" in App Store
Checking for update of app workflowengine in appstore
Checked for update of app "workflowengine" in App Store
Starting code integrity check...

After about 5 minutes (depending on the system hardware), NextCloud should be back online. At that point, the upgrade is complete.
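
To double-check the result from the command line, occ can be queried inside the app container. This is a minimal sketch, assuming the deployment and container are both named ‘nextcloud’ (as in the examples above) and that occ is run as the www-data user.

# Confirm the running version and that maintenance mode is off
kubectl exec deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c "php /var/www/html/occ status"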

Kubernetes Ingress Error 502 Upon NextCloud Upgrades

Issue:
Just the other day, I attempted to run a ‘helm upgrade…’ command against my NextCloud application. I had taken care to ensure that the container’s version matched the version recorded on the persistent storage (e.g. image.tag=22.1-fpm), since a mismatch there would cause NextCloud not to start. However, another issue puzzled me: a 502 error upon navigating to the URL of the application.

Resolution:
– Check the logs
– Review Kubernetes Ingress documentation
– Realize that this specific issue requires no fixing

Checking the logs:

admin@controller:~$ k logs nextcloud-67855fc94c-lc2xr nextcloud-nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/01 04:18:37 [error] 34#34: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.90.192, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com"
... Truncated for brevity ...
2021/11/01 04:34:20 [error] 34#34: *155 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.100.95, server: , request: "GET /apps/photos/service-worker.js HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com", referrer: "https://dragoncoin.com/apps/photos/service-worker.js"
172.16.100.95 - - [01/Nov/2021:04:34:20 +0000] "GET /apps/photos/service-worker.js HTTP/1.1" 502 559 "https://dragoncoin.com/apps/photos/service-worker.js" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" "172.16.100.164"
admin@controller:~$ k logs nextcloud-67855fc94c-lc2xr nextcloud
Initializing nextcloud 22.1.1.2 ...
Upgrading nextcloud from 22.1.0.1 ...
Initializing finished
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Setting log level to debug
Turned on maintenance mode
Updating database schema
Updated database
Updating <workflowengine> ...
Updated <workflowengine> to 2.3.1
Checking for update of app accessibility in appstore
Checked for update of app "accessibility" in App Store
Checking for update of app activity in appstore
Checked for update of app "activity" in App Store
Checking for update of app audioplayer in appstore
Update app audioplayer from App Store
Checked for update of app "audioplayer" in App Store
Checking for update of app breezedark in appstore
Update app breezedark from App Store
Checked for update of app "breezedark" in App Store
Checking for update of app bruteforcesettings in appstore
Checked for update of app "bruteforcesettings" in App Store
Checking for update of app camerarawpreviews in appstore
Checked for update of app "camerarawpreviews" in App Store
Checking for update of app cloud_federation_api in appstore
Checked for update of app "cloud_federation_api" in App Store
Checking for update of app cms_pico in appstore
Update app cms_pico from App Store
Repair warning: Replacing Pico CMS config file "config.yml.template"
Repair warning: Replacing Pico CMS system template "empty"
Repair warning: Replacing Pico CMS system template "sample_pico"
Repair warning: Replacing Pico CMS system theme "default"
Repair warning: Replacing Pico CMS system plugin "PicoDeprecated"
Checked for update of app "cms_pico" in App Store
Checking for update of app contactsinteraction in appstore
Checked for update of app "contactsinteraction" in App Store
Checking for update of app dav in appstore
Checked for update of app "dav" in App Store
Checking for update of app documentserver_community in appstore
Checked for update of app "documentserver_community" in App Store
Checking for update of app drawio in appstore
Checked for update of app "drawio" in App Store
Checking for update of app external in appstore
Checked for update of app "external" in App Store
Checking for update of app federatedfilesharing in appstore
Checked for update of app "federatedfilesharing" in App Store
Checking for update of app files in appstore
Checked for update of app "files" in App Store
Checking for update of app files_antivirus in appstore
Update app files_antivirus from App Store
Checked for update of app "files_antivirus" in App Store
Checking for update of app files_markdown in appstore
Checked for update of app "files_markdown" in App Store
Checking for update of app files_mindmap in appstore
Checked for update of app "files_mindmap" in App Store
Checking for update of app files_pdfviewer in appstore
Checked for update of app "files_pdfviewer" in App Store
Checking for update of app files_rightclick in appstore
Checked for update of app "files_rightclick" in App Store
Checking for update of app files_sharing in appstore
Checked for update of app "files_sharing" in App Store
Checking for update of app files_trashbin in appstore
Checked for update of app "files_trashbin" in App Store
Checking for update of app files_versions in appstore
Checked for update of app "files_versions" in App Store
Checking for update of app files_videoplayer in appstore
Checked for update of app "files_videoplayer" in App Store
Checking for update of app forms in appstore
Checked for update of app "forms" in App Store
Checking for update of app logreader in appstore
Checked for update of app "logreader" in App Store
Checking for update of app lookup_server_connector in appstore
Checked for update of app "lookup_server_connector" in App Store
Checking for update of app maps in appstore
Checked for update of app "maps" in App Store
Checking for update of app music in appstore
Update app music from App Store
Checked for update of app "music" in App Store
Checking for update of app news in appstore
Update app news from App Store
Checked for update of app "news" in App Store
Checking for update of app notifications in appstore
Checked for update of app "notifications" in App Store
Checking for update of app oauth2 in appstore
Checked for update of app "oauth2" in App Store
Checking for update of app password_policy in appstore
Checked for update of app "password_policy" in App Store
Checking for update of app photos in appstore
Checked for update of app "photos" in App Store
Checking for update of app privacy in appstore
Checked for update of app "privacy" in App Store
Checking for update of app provisioning_api in appstore
Checked for update of app "provisioning_api" in App Store
Checking for update of app quicknotes in appstore
Checked for update of app "quicknotes" in App Store
Checking for update of app recommendations in appstore
Checked for update of app "recommendations" in App Store
Checking for update of app registration in appstore
Checked for update of app "registration" in App Store
Checking for update of app richdocuments in appstore
Update app richdocuments from App Store
Checked for update of app "richdocuments" in App Store
Checking for update of app serverinfo in appstore
Checked for update of app "serverinfo" in App Store
Checking for update of app settings in appstore
Checked for update of app "settings" in App Store
Checking for update of app sharebymail in appstore
Checked for update of app "sharebymail" in App Store
Checking for update of app spreed in appstore
Update app spreed from App Store
Checked for update of app "spreed" in App Store
Checking for update of app support in appstore
Checked for update of app "support" in App Store
Checking for update of app survey_client in appstore
Checked for update of app "survey_client" in App Store
Checking for update of app systemtags in appstore
Checked for update of app "systemtags" in App Store
Checking for update of app tasks in appstore
Checked for update of app "tasks" in App Store
Checking for update of app text in appstore
Checked for update of app "text" in App Store
Checking for update of app theming in appstore
Checked for update of app "theming" in App Store
Checking for update of app twofactor_backupcodes in appstore
Checked for update of app "twofactor_backupcodes" in App Store
Checking for update of app updatenotification in appstore
Checked for update of app "updatenotification" in App Store
Checking for update of app user_status in appstore
Checked for update of app "user_status" in App Store
Checking for update of app video_converter in appstore
Update app video_converter from App Store
Checked for update of app "video_converter" in App Store
Checking for update of app viewer in appstore
Checked for update of app "viewer" in App Store
Checking for update of app weather_status in appstore
Checked for update of app "weather_status" in App Store
Checking for update of app workflowengine in appstore
Checked for update of app "workflowengine" in App Store
Starting code integrity check...

Reviewing Documentation:
According to the Kubernetes ingress requirements (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites), the application must return a 200 status code at ‘/’. It is known behavior that an application that is not yet in a ‘ready’ state returns a 302 (redirect to login). If health checks are configured to run, the failing results cause the ingress resource to return 502. Even when health checks are skipped, a container that is still in the ‘Starting code integrity check…’ phase relays non-200 statuses, which leads the Ingress to return 502 to users. In other words, these 502 errors are expected during the upgrade and clear on their own once the pod becomes ready.
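
In practice, this means the 502 responses can simply be waited out. A quick way to watch for the application to become healthy again is sketched below; the host name comes from the example deployment, and the polling interval is arbitrary.

# Watch the pod until both containers report Ready
kubectl get pods -w
# Poll status.php until it returns HTTP 200 again
until [ "$(curl -s -o /dev/null -w '%{http_code}' https://dragoncoin.com/status.php)" = "200" ]; do sleep 10; done; echo 'NextCloud is back online'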

Kubernetes: Cert-Manager x509 ECDSA verification failure

Symptoms

Error from server (InternalError): error when creating "kimconnect-cert.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")
Warning: resource certificates/kimconnect-cert is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Error from server (InternalError): error when applying patch:

Check Cert-Manager Pods

# Example showing multiple restarts of 'cainjector' pod
admin@controller:~$ kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-756bb56c5-zc7sb               1/1     Running   3          77d
cert-manager-cainjector-86bc6dc648-2txgt   1/1     Running   9          77d
cert-manager-webhook-66b555bb5-t2fds       1/1     Running   1          77d

Check the logs of ‘cainjector’

# Command to view the logs of 'cainjector' (use the pod name from the 'kubectl get pod' output above)
kubectl logs -f -n cert-manager cert-manager-cainjector-86bc6dc648-2txgt
# Sample output
W1030 18:29:32.199890       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:29:33.198383       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:30:22.202957       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:32:04.182442       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:32:15.301538       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:32:29.169059       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:32:57.294345       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:33:33.192557       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:36:11.190905       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:37:19.167013       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:38:24.202423       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:38:29.325957       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:39:07.203502       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:39:29.204680       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:42:19.189894       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:42:42.331353       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:44:34.313338       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:44:45.185535       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:45:53.167541       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:46:22.156119       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:46:42.146226       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:47:41.127856       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:50:15.340261       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:51:14.322043       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:51:15.200657       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:51:45.253603       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:52:08.236342       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:52:35.244945       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 18:55:26.228485       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:55:39.226791       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 18:56:53.331235       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:57:06.330913       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 18:58:01.184241       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 18:58:17.187563       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:01:37.188133       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:02:04.190773       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:02:29.178814       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:03:16.190983       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:03:26.323601       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:03:28.195785       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:03:31.206823       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:03:53.331321       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:08:06.209166       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:08:28.209724       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:10:34.314565       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:11:04.308148       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:11:05.171881       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:11:55.193059       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:12:10.211088       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:12:15.206870       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:13:34.212396       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:14:27.196581       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:16:13.320215       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:17:20.215518       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:18:29.330780       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:18:30.210758       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:20:00.175053       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:20:41.195374       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:22:08.337737       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:22:33.210625       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:24:24.180391       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:25:43.343594       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:25:45.216737       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:26:28.222022       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:27:02.195941       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:27:50.198613       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:28:33.178871       1 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1030 19:29:59.215813       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:30:05.341900       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:33:12.205251       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W1030 19:33:27.338509       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1030 19:34:50.217154       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W1030 19:35:39.197589       1 warnings.go:67] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration

Resolutions

Option 1: delete the ‘cainjector’ pod so that it is automatically recreated

# Command to delete pod
kubectl delete pod cert-manager-cainjector-86bc6dc648-2txgt -n cert-manager 
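
The deployment controller recreates the pod automatically. Once the new cainjector pod is Running, re-applying the manifest that originally tripped the webhook error should succeed; the file name below is taken from the symptoms above.

# Confirm that a fresh cainjector pod has been scheduled
kubectl get pod -n cert-manager -w
# Retry the operation that triggered the x509 error
kubectl apply -f kimconnect-cert.yaml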

Option 2: patch cert-manager

# Shared by wutz: https://github.com/jetstack/cert-manager/issues/3338

patchesJson6902:
  - target:
      kind: ClusterRole
      name: cert-manager-cainjector
      version: v1
      group: rbac.authorization.k8s.io
    patch: |-
      - op: add
        path: /rules/-
        value:
            apiGroups:
              - ""
            resources:
              - configmaps
            verbs:
              - get
              - create
              - update
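
For clusters where kustomize isn’t part of the workflow, roughly the same RBAC change can be applied directly with a JSON patch against the existing ClusterRole. This is a sketch of an equivalent one-liner, not the upstream fix; verify the ClusterRole name in your cert-manager installation first.

# Grant the cainjector access to configmaps (mirrors the kustomize patch above)
kubectl patch clusterrole cert-manager-cainjector --type='json' \
  -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":[""],"resources":["configmaps"],"verbs":["get","create","update"]}}]'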

Option 3: add a snippet to the helmfile to disable cainjector

# Shared by sullerandras (https://github.com/jetstack/cert-manager/issues/3338)
releases:
  - name: cert-manager
    namespace: kube-system
    chart: jetstack/cert-manager
    version: v1.0.2
    values:
      - installCRDs: true
      - cainjector:
          enabled: false

Kubernetes: Cert-Manager Certificate Request YAML Example

# Set variables
certPrefix=kimconnect
domainName=kimconnect.com
domainName2=www.kimconnect.com
serviceName=kimconnectblog-wordpress
servicePort=8080

# Create a yaml file and create cert with it
cat <<EOF > $certPrefix-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $certPrefix-cert
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/tls-acme: "true"
spec:
  dnsNames:
    - $domainName
    - $domainName2
  secretName: $certPrefix-cert
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
EOF
kubectl create -f $certPrefix-cert.yaml
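
Once applied, cert-manager should complete the ACME challenge and store the signed certificate in the named secret. Issuance can be followed with the commands below (same variable names as above).

# Follow the issuance until READY shows True
kubectl get certificate $certPrefix-cert -n default -w
kubectl describe certificate $certPrefix-cert -n default
# The signed certificate and key end up in this TLS secret
kubectl get secret $certPrefix-cert -n default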

PowerShell: Obtain List of Hyper-V Hosts via Active Directory

# listHyperVHostsInForests.ps1
# Version: 0.03

function listHyperVHostsInForests{
  # Ensure that AD management module is available for PS Session
  if (!(get-module -name "ActiveDirectory") ){
      Add-WindowsFeature RSAT-AD-PowerShell | out-null;
      import-module -name "ActiveDirectory" -DisableNameChecking | out-null;
      }

  function ListHyperVHosts {            
    [cmdletbinding()]            
    param(
      [string]$forest
    )            
    try {            
     Import-Module ActiveDirectory -ErrorAction Stop            
    } catch {            
     Write-Warning "Failed to import Active Directory module. Cannot continue. Aborting..."           
     break;
    }            

    $domains=(Get-ADForest -Identity $forest).Domains 
    foreach ($domain in $domains){
    #"$domain`: `n"
    [string]$dc=(get-addomaincontroller -DomainName $domain -Discover -NextClosestSite).HostName
    try {             
     $hyperVs = Get-ADObject -Server $dc -Filter 'ObjectClass -eq "serviceConnectionPoint" -and Name -eq "Microsoft Hyper-V"' -ErrorAction Stop;
    } catch {            
     "Failed to query $dc of $domain";         
    }            
    foreach($hyperV in $hyperVs) {            
       $x = $hyperV.DistinguishedName.split(",")            
       $HypervDN = $x[1..$x.Count] -join ","     
       if ( !($HypervDN -match "CN=LostAndFound")) {     
        $computer=Get-ADComputer -Id $HypervDN -Prop *
        $vmCount=.{
          $x=try{invoke-command -computername $computer.Name {(get-vm).count} -EA Stop}catch{-1}
          if($x -ne -1){
            return $x
          }else{
            return "Unable to probe"
          }
        }
        $thisObject=New-Object PSObject -Prop (@{
                hostname=$computer.Name
                operatingSystem=$computer.operatingSystem
                vmCount=$vmCount
                })
            $thisObject
          }           
      }
     }
  }

  function listForests{
      $GLOBAL:forests=Get-ADForest | select Name;
      if ($forests.length -gt 1){
          #for ($i=0;$i -lt $forests.length;$i++){$forests[$i].Name;}
          $forests | %{$_.Name;}
      }else{
          $forests.Name;
      }
  }
  listForests|%{ListHyperVHosts $_}
}

listHyperVHostsInForests

Sample Output

PS C:\Windows\system32> listHyperVHostsInForests

vmCount         hostname   operatingSystem
-------         --------   ---------------
Unable to probe HV01       Windows Server 2012 R2 Datacenter
Unable to probe HV02       Windows Server 2012 R2 Datacenter
Unable to probe TESTHV01   Windows Server 2019 Standard
Unable to probe TESTHV02   Windows Server 2019 Standard
9               LAX-HV01   Windows Server 2019 Datacenter
9               LAX-HV02   Windows Server 2019 Datacenter
21              LAX-HV03   Windows Server 2019 Datacenter
8               LAX-HV04   Windows Server 2019 Datacenter
14              LAX-HV05   Windows Server 2019 Datacenter
23              LAX-HV06   Windows Server 2019 Datacenter
21              LAX-HV07   Windows Server 2019 Datacenter
12              LAX-HV08   Windows Server 2019 Datacenter
16              LAX-HV09   Windows Server 2019 Datacenter
110             LAX-HV10   Windows Server 2019 Datacenter
97              LAX-HV11   Windows Server 2019 Datacenter
7               LAX-HV12   Windows Server 2019 Datacenter
32              LAX-HV13   Windows Server 2019 Datacenter
1               LAX-HV14   Windows Server 2019 Datacenter

Kubernetes: Use Helm to Deploy WordPress

Deploying WordPress in a Kubernetes cluster isn’t as straightforward as one might expect. Since the whole infrastructure is controlled by K8s, the deployed containers must be configured with the correct permissions to avoid strange issues. For example, certain plugins such as NextGen Gallery may seem to work fine (initially) in a WordPress instance, only to suddenly ‘break’ as CSS and JS (JavaScript) assets start returning 404s. Specifically, lightbox and other gallery-view functions become orphaned, leaving a gallery view without slideshows, multi-column views, and other animations. Hence, there were many lessons learned prior to issuing this blog post…

Step 1: Prepare an SSL Cert using LetsEncrypt

# Create the SSL cert, assuming cert-manager has been deployed in the Kubernetes cluster
appName=kimconnect
domainName=kimconnect.com
cat <<EOF > $appName-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $appName-cert
  namespace: $appName
  annotations:
    kubernetes.io/ingress.class: "nginx"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/tls-acme: "true"
spec:
  dnsNames:
    - $domainName
  secretName: $appName-cert
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
EOF
kubectl apply -f $appName-cert.yaml

Step 2: Deploy WordPress

# Create the namespace and switch into it
appName=kimconnect
kubectl create namespace $appName
kubectl config set-context --current --namespace=$appName

# Install WordPress with Dynamic NFS Provisioning

# Documentation: https://hub.kubeapps.com/charts/bitnami/wordpress
# Set variables
appName=kimconnect
domainName=kimconnect.com
wordpressusername=kimconnect
wordpressPassword=SOMEVERYCOMPLEXPASSWORD
rootPassword=SOMEVERYCOMPLEXPASSWORD
storageClass=nfs-client

helm install $appName bitnami/wordpress \
  --set readinessProbe.enabled=false \
  --set image.tag=latest \
  --set persistence.accessMode=ReadWriteMany \
  --set persistence.storageClass=$storageClass \
  --set persistence.size=10Ti \
  --set mariadb.primary.persistence.storageClass=$storageClass \
  --set mariadb.primary.persistence.size=300Gi \
  --set wordpressUsername=$wordpressusername \
  --set wordpressPassword=$wordpressPassword \
  --set mariadb.auth.rootPassword=$rootPassword \
  --set mariadb.auth.password=$rootPassword \
  --set ingress.enabled=true,ingress.hostname=$domainName \
  --set volumePermissions.enabled=true \
  --set allowEmptyPassword=false \
  --set service.externalTrafficPolicy=Local # this setting is to make sure the source IP address is preserved.
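
It may take a few minutes for the WordPress and MariaDB pods to become ready. The checks below are a sketch; the secret name and key follow the Bitnami chart’s usual conventions, so verify them against the chart’s own NOTES output.

# Watch the pods and confirm the service/ingress endpoints
kubectl get pods,svc,ingress -n $appName
# Retrieve the stored admin password (Bitnami convention: secret '<release>-wordpress', key 'wordpress-password')
kubectl get secret $appName-wordpress -n $appName -o jsonpath='{.data.wordpress-password}' | base64 --decode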

Step 3: Patch Ingress

# Patch the deployed ingress with an existing SSL cert
appName=kimconnect
domainName=kimconnect.com
certName=$appName-cert
serviceName=$appName-wordpress
cat <<EOF> $appName-patch.yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      location ~ /wp-admin$ {
           return 301 /wp-admin/;
       }    
spec:
  tls:
  - hosts:
    - $domainName
    secretName: $certName
  rules:
  - host: $domainName
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: $serviceName
            port:
              number: 80
EOF
kubectl patch ingress/$appName-wordpress -p "$(cat $appName-patch.yaml)"
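
A quick way to confirm that the patch took effect and that the certificate is being served (domain name from the variables above):

# Confirm the TLS block landed on the patched ingress
kubectl get ingress $appName-wordpress -n $appName -o yaml | grep -A 5 'tls:'
# Verify the site answers over HTTPS with the expected certificate
curl -vkI https://$domainName 2>&1 | grep -iE 'HTTP/|subject:|issuer:'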

Step 4: Patch WordPress to Add Libraries and Frameworks

# Still researching this item
# Add iocube loader while inside the container

sed -i '/\[PHP\]/a zend_extension=/bitnami/wordpress/extensions/ioncube_loader_lin_7.4.so' /opt/bitnami/php/etc/php.ini && httpd -k restart
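
For reference, the same edit can be pushed from the workstation instead of an interactive shell. This is a hedged sketch: the deployment name follows the Bitnami naming convention used above, and the ionCube loader file is assumed to already exist at that path on the persistent volume.

# Run the php.ini edit and restart Apache inside the WordPress container
kubectl exec -n $appName deploy/$appName-wordpress -- bash -c \
  "sed -i '/\[PHP\]/a zend_extension=/bitnami/wordpress/extensions/ioncube_loader_lin_7.4.so' /opt/bitnami/php/etc/php.ini && httpd -k restart"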

PowerShell: Rename Hyper-V Object and Clustered Role Resource

$oldName='testWindows'
$newName='server01.intranet.kimconnect.com'

# Rename Hyper-V Clustered Resource: Guest VM
$vm=Get-clustergroup -Cluster (get-cluster).Name -Name $oldName
$vm.Name=$newName

# Rename Hyper-V Object. This must be executed on the owner node
$ownerNode=$vm.OwnerNode
invoke-command -computername $ownerNode {
    param ($oldName,$newName)
    try{
        Rename-VM $oldName -NewName $newName
        write-host "Guest VM $oldName has been renamed to $newName"
        stop-vm $newName
        $disks=(get-vm $newName|Get-VMHardDiskDrive).Path
        $primaryStoragePath=$disks|%{split-path $_ -parent}|select -first 1
        $newPath=$(split-path $primaryStoragePath -Parent)+'\'+$newName
        $null=mkdir $newPath -force    
        Move-VMStorage $newName -DestinationStoragePath $newPath
        start-vm $newName
        write-host "Old storage path: $primaryStoragePath`r`nNew Storage Path: $newPath"
        write-host "Please verify that the old storage path is empty and remove old storage path manually."
    }catch{
        write-warning $_
    }
} -Args $oldName,$newName

Sample Output:

Guest VM testWindows has been renamed to server01.intranet.kimconnect.com
Old storage path: \\fileserver\vms\testWindows
New Storage Path: \\fileserver\vms\server01.intranet.kimconnect.com
Please verify that the old storage path is empty and remove old storage path manually.

Adding a Domain Security Group into the Hyper-V Administrator Users Group

Issue:
A domain security group (e.g. ‘intranet\HyperV Admins’) needs rights to manage virtual machines on one or more Hyper-V hosts, without granting each member permissions individually.

Resolution:

  1. Click Start > Control Panel > Administration Tools > Computer Management > System Tools > Local Users and Groups > Groups
  2. Double-click the Hyper-V Administrators group > Click Add > In the Enter the object names to select field, enter the user account name to whom you want to assign permissions > OK > Apply > OK
  3. Double-click the Administrators group > Click Add > In the Enter the object names to select field, enter the user account name to whom you want to assign permissions > OK > Apply > OK

Alternatively, one can run a script such as:

$remoteComputers='HyperVServer1','HyperVServer2'
$newMembers='intranet\HyperV Admins'
$localGroup='Hyper-V Administrators','Administrators'
$domainAdminCred=$null

function addUserToLocalGroup{
    param(
    $computername=$env:computername,
    $accountToAdd,
    $accountPassword=$null,
    $localGroup='Administrators',
    $domainAdminCred=$null
    )
    try{
        $session=if($domainAdminCred){
            new-pssession $computername -Credential $domainAdminCred -ea Stop
          }else{
            new-pssession $computername -ea Stop
          }        
        }
    catch{
        write-warning $_
        return $false
        }
    invoke-command -session $session -scriptblock{
        param($principleName,$password,$groupName)
        $osVersion=[System.Environment]::OSVersion.Version
        $psVersion=$PSVersionTable.PSVersion
        $computerRole=switch ((Get-WmiObject Win32_OperatingSystem -EA Silentlycontinue).ProductType){
            1 {'client'} # ClientOs
            2 {'domaincontroller'} #ServerOs with DC role
            3 {'memberserver'} #ServerOs machines
            }
        if($computerRole -eq 'domaincontroller'){
            write-warning "$env:computername is a Domain Controller. Local Users and Groups are not applicable."
            return $false
        }
        $members=if($osVersion -gt [version]'6.3.9600.0' -or $psVersion -ge [version]'5.1'){
            (get-localgroupmember $groupName).Name
        }else{
            $x=net localgroup $groupName
            $x[6..$($x.length-3)]
        }
        $localUsers=if($osVersion -gt [version]'6.3.9600.0' -or $psVersion -ge [version]'5.1'){
            (get-localuser).Name
        }else{
            $x=net user
            $x[4..$($x.length-3)] -split ' '|?{$_.trim()}
        }

        if(!($members|?{$_ -eq $principleName -or $_ -eq "$env:computername\$principleName"})){ # backward compatible with legacy PowerShell
            try{
                if(!($localUsers|?{$_ -eq $principleName}) -and $principleName -notmatch '\\'){
                    if($osVersion -gt [version]'6.3.9600.0' -or $psVersion -ge [version]'5.1'){
                        $encryptedPass = ConvertTo-SecureString $password -AsPlainText -Force
                        New-LocalUser -name $principleName -Password $encryptedPass -FullName "$principleName"
                    }else{
                        $null=net user $principleName "$password" /add /passwordreq:yes /fullname:"$principleName"
                    }            
                }
                write-host "Adding $principleName into $groupName on $env:computername"                
                if($osVersion -gt [version]'6.3.9600.0' -or $psVersion -ge [version]'5.1'){
                    Add-LocalGroupMember -Group $groupName -Member $principleName -ea Stop
                }else{
                    $null=net localgroup $groupName /add $principleName
                }
                $currentMembers=if($osVersion -gt [version]'6.3.9600.0' -or $psVersion -ge [version]'5.1'){
                    (get-localgroupmember $groupName).Name
                }else{
                    $x=net localgroup $groupName
                    $x[6..$($x.length-3)]
                }
                if($currentMembers|?{$principleName -eq $_}){
                    write-host "$principleName has been added to $groupName successfully`r`n$($currentMembers|out-string)"
                    return $true
                }else{
                    write-host "$principleName has NOT been added into group $groupName`r`n$($currentMembers|out-string)"
                    return $false
                }               
            }catch{
                write-warning "$error"
                return $false
                }
        }else{
            write-host "$principleName is already a member of $groupName."
            return $true}
        } -args $accountToAdd,$accountPassword,$localGroup
    remove-pssession $session
}
$remoteComputers|%{
    $computer=$_;
    write-host "Checking $computer..."
    $localGroup|%{
        $group=$_;
        # A password is only required when creating a new local user; pass $null for existing domain accounts
        $newMembers|%{addUserToLocalGroup $computer $_ $null $group $domainAdminCred}
    }
}

WordPress NextGen Gallery Plugin Error

Error Message:

Failed to load plugin url: /bitnami/wordpress/wp-content/plugins/nextgen-gallery/products/photocrati_nextgen/modules/attach_to_post/static/ngg_attach_to_post_tinymce_plugin.js?ver=3.17

Resolution:

Although the root cause hasn’t been determined, this issue resolved itself once the running WordPress ‘container’ or ‘pod’ was destroyed and recreated.
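
For reference, recreating the pod amounts to deleting it and letting the deployment controller spin up a replacement. A minimal sketch, assuming the Bitnami chart’s standard labels and the ‘kimconnect’ namespace used earlier in this post:

# Delete the WordPress pod; the deployment recreates it automatically
kubectl delete pod -n kimconnect -l app.kubernetes.io/name=wordpress
kubectl get pods -n kimconnect -w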

Posting this note here in case someone Googles similar error messages. Also, this may be a symptom of an unhealthy container in a pod.

How To Install Graylog in a Kubernetes Cluster Using Helm Charts

The following narrative assumes that a Kubernetes cluster (current stable version 20.10) has been set up with MetalLB as the load balancer. This should also work with Traefik or other load balancers.

# Create a separate namespace for this project
kubectl create namespace graylog

# Change into the graylog namespace
kubectl config set-context --current --namespace=graylog
kubectl config view --minify | grep namespace: # Validate it

# Optional: delete previous test instances of graylog that have been deployed via Helm
helm delete "graylog" --namespace graylog
kubectl delete pvc --namespace graylog --all

# How to switch execution context back to the 'default' namespace
kubectl config set-context --current --namespace=default

# Optional: installing mongodb prior to Graylog
helm install "mongodb" bitnami/mongodb --namespace "graylog" \
  --set persistence.size=100Gi
# Sample output:
NAME: mongodb
LAST DEPLOYED: Thu Aug 29 00:07:36 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
MongoDB® can be accessed on the following DNS name(s) and ports from within your cluster:
    mongodb.graylog.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace graylog mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, create a MongoDB® client container:
    kubectl run --namespace graylog mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.8-debian-10-r9 --command -- bash
Then, run the following command:
    mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace graylog svc/mongodb 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

# REQUIRED: Pre-install ElasticSearch version 7.10, the highest version supported by Graylog 4.1.3
# Source: https://artifacthub.io/packages/helm/elastic/elasticsearch/7.10.2
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --namespace "graylog" \
  --set imageTag=7.10.2 \
  --set data.persistence.size=100Gi
# Sample output:
NAME: elasticsearch
LAST DEPLOYED: Sun Aug 29 04:35:30 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=graylog -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch
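
Before moving on to Graylog itself, it’s worth confirming that Elasticsearch is healthy and that the repository hosting the kongz/graylog chart is registered. The repository URL below is an assumption based on that chart’s documentation; verify it before adding.

# Confirm the Elasticsearch pods are up and the cluster reports green/yellow health
kubectl get pods -n graylog -l app=elasticsearch-master
kubectl exec -n graylog elasticsearch-master-0 -- curl -s 'http://localhost:9200/_cluster/health?pretty'
# Register the repository that hosts the kongz/graylog chart (URL assumed; check the chart docs)
helm repo add kongz https://charts.kong-z.com
helm repo update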

# Installation of Graylog with mongodb bundled, while integrating with a pre-deployed elasticSearch instance
#
# This install command assumes that the preferred protocol for transporting logs is TCP.
# Also, the current helm chart does not allow mixing TCP with UDP; this approach conveniently
# matches business requirements where a reliable TCP transport is necessary to record security data.
helm install graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.type=LoadBalancer \
  --set graylog.service.port=80 \
  --set graylog.service.loadBalancerIP=10.10.100.88 \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# Optional: add these lines if the mongodb component has been installed separately
  --set tags.install-mongodb=false \
  --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0 \

# Moreover, the graylog chart version 1.8.4 doesn't seem to set externalTrafficPolicy as expected.
# Set externalTrafficPolicy = local to preserve source client IPs
kubectl patch svc graylog-web -n graylog -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Sometimes, the static EXTERNAL-IP would be assigned to graylog-master, while graylog-web's EXTERNAL-IP would
# remain in the <pending> status indefinitely.
# Workaround: set services to share a single external IP
kubectl patch svc graylog-web -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'
kubectl patch svc graylog-web -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'

# Test sending logs to server via TCP
graylogServer=graylog.kimconnect.com # note: Bash variable names cannot contain hyphens
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -w 1 $graylogServer 514

# Test via UDP
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -u -w 1 $graylogServer 514

# Optional: graylog Ingress
cat > graylog-ingress.yaml <<EOF
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: graylog-ingress
  namespace: graylog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # set these for SSL
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"  
spec:
  rules:
  - host: graylog.kimconnect.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 12201
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 514              
EOF
kubectl apply -f graylog-ingress.yaml
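To confirm the ingress was admitted and points at the intended backend, a couple of read-only checks are sufficient:

# Verify the ingress object and its rules
kubectl get ingress -n graylog graylog-ingress
kubectl describe ingress -n graylog graylog-ingress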

Troubleshooting Notes:

# Sample commands to patch graylog service components
kubectl patch svc graylog-web -p '{"spec":{"type":"LoadBalancer"}}' # Convert ClusterIP to LoadBalancer to gain ingress
kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}' # Add externalIPs
kubectl patch svc graylog-master -n graylog -p '{"spec":{"loadBalancerIP":""}}' # Remove loadBalancer IPs
kubectl patch svc graylog-master -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}' # Purge ingress IPs
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[{"ip":"10.10.100.88"}]}}}'
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}'

# Alternative solution: mixing UDP with TCP
# The current chart version only allows this when service Type = ClusterIP (default)
helm upgrade graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.port=80 \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[0].protocol=UDP \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.service.ports[1].protocol=UDP \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# This error message occurs when combining TCP with UDP on a LoadBalancer service; hence, the default ClusterIP service type must be used
Error: UPGRADE FAILED: cannot patch "graylog-web" with kind Service: Service "graylog-web" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"graylog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:9000, StrVal:""}, NodePort:32518}, core.ServicePort{Name:"gelf", Protocol:"UDP", AppProtocol:(*string)(nil), Port:12201, TargetPort:intstr.IntOrString{Type:0, IntVal:12201, StrVal:""}, NodePort:0}, core.ServicePort{Name:"gelf2", Protocol:"TCP", AppProtocol:(*string)(nil), Port:12222, TargetPort:intstr.IntOrString{Type:0, IntVal:12222, StrVal:""}, NodePort:31523}, core.ServicePort{Name:"syslog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:31626}}: may not contain more than 1 protocol when type is 'LoadBalancer'

# This error occurs when a string is supplied where an array value is expected
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Service.spec.externalIPs): invalid type for io.k8s.api.core.v1.ServiceSpec.externalIPs: got "string", expected "array"
# Solution:
--set "array={a,b,c}" OR --set service[0].port=80

# Graylog would not start and this was the error:
com.github.joschi.jadconfig.ValidationException: Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable

# Workaround
graylogData=/mnt/k8s/graylog-journal-graylog-0-pvc-04dd9c7f-a771-4041-b549-5b4664de7249/
chown -fR 1100:1100 $graylogData
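To confirm the fix, list the directory with numeric IDs; 1100 should now appear as both owner and group (the UID/GID the Graylog container runs as):

ls -ldn $graylogData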

NAME: graylog
LAST DEPLOYED: Thu Aug 29 03:26:00 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To connect to your Graylog server:
1. Get the application URL by running these commands:
  Graylog Web Interface uses JavaScript to get detail of each node. The client JavaScript cannot communicate to node when service type is `ClusterIP`.
  If you want to access Graylog Web Interface, you need to enable Ingress.
    NOTE: Port Forward does not work with web interface.
2. The Graylog root users
  echo "User: admin"
  echo "Password: $(kubectl get secret --namespace graylog graylog -o "jsonpath={.data['graylog-password-secret']}" | base64 --decode)"
To send logs to graylog:
  NOTE: If `graylog.input` is empty, you cannot send logs from other services. Please make sure the value is not empty.
        See https://github.com/KongZ/charts/tree/main/charts/graylog#input for detail

k describe pod graylog-0
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         11m                   default-scheduler  Successfully assigned graylog/graylog-0 to linux03
  Normal   Pulled            11m                   kubelet            Container image "alpine" already present on machine
  Normal   Created           11m                   kubelet            Created container setup
  Normal   Started           10m                   kubelet            Started container setup
  Normal   Started           4m7s (x5 over 10m)    kubelet            Started container graylog-server
  Warning  Unhealthy         3m4s (x4 over 9m14s)  kubelet            Readiness probe failed: Get "http://172.16.90.197:9000/api/system/lbstatus": dial tcp 172.16.90.197:9000: connect: connection refused
  Normal   Pulled            2m29s (x6 over 10m)   kubelet            Container image "graylog/graylog:4.1.3-1" already present on machine
  Normal   Created           2m19s (x6 over 10m)   kubelet            Created container graylog-server
  Warning  BackOff           83s (x3 over 2m54s)   kubelet            Back-off restarting failed container


# Set external IP
# This only works on LoadBalancer, not ClusterIP
# kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}'
# kubectl patch svc graylog-master -p '{"spec":{"externalIPs":[]}}'

kubectl patch service graylog-web --type='json' -p='[{"op": "add", "path": "/metadata/annotations/kubernetes.io~1ingress.class", "value":"nginx"}]'

# Set annotation to allow shared IPs between 2 different services
kubectl annotate service graylog-web metallb.universe.tf/allow-shared-ip=graylog
kubectl annotate service graylog-master metallb.universe.tf/allow-shared-ip=graylog

metadata:
  name: $serviceName-tcp
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: psk

# Ingress
appName=graylog
domain=graylog.kimconnect.com
deploymentName=graylog-web
containerPort=9000
cat <<EOF> $appName-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $appName-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: $domain
    http:
      paths:
      - backend:
          service:
            name: $deploymentName
            port:
              number: 9000
        path: /
        pathType: Prefix
EOF
kubectl apply -f $appName-ingress.yaml

# Delete specific PVCs
namespace=graylog
kubectl delete pvc data-graylog-elasticsearch-data-0 -n $namespace
kubectl delete pvc data-graylog-elasticsearch-master-0 -n $namespace
kubectl delete pvc datadir-graylog-mongodb-0 -n $namespace
kubectl delete pvc journal-graylog-0 -n $namespace

# Delete all PVCs in the namespace the easier way
namespace=graylog
kubectl get pvc -n $namespace --no-headers | awk '{print $1}' | while read vol; do kubectl delete pvc/${vol} -n $namespace; done

2021-08-20 20:19:41,048 INFO    [cluster] - Exception in monitor thread while connecting to server mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017 - {}
com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[graylog.jar:?]
        at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[graylog.jar:?]
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302]
Caused by: java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_302]
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[graylog.jar:?]
        ... 5 more

2021-08-20 20:19:42,981 INFO    [cluster] - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@69419d59 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}, caused by {java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}}]}. Waiting for 30000 ms before timing out - {}

# Alternative version - that doesn't work
# helm repo add groundhog2k https://groundhog2k.github.io/helm-charts/
# helm install graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri='http://127.0.0.1:9000/' \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true

# helm upgrade graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri=http://localhost:9000/ \
#   --set service.externalTrafficPolicy=Local \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true \
#   --set storage.className=nfs-client \
#   --set storage.requestedSize=200Gi

# kim@linux01:~$ k logs graylog-0
# 2021-08-29 03:47:09,345 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
# com.github.joschi.jadconfig.ValidationException: Couldn't run validator method
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:227) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:100) ~[graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:420) [graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:236) [graylog.jar:?]
#         at org.graylog2.bootstrap.Main.main(Main.java:45) [graylog.jar:?]
# Caused by: java.lang.reflect.InvocationTargetException
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]
#         ... 4 more
# Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: "!s"
#         at java.net.URLDecoder.decode(URLDecoder.java:194) ~[?:1.8.0_302]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1035) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1030) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.<init>(ConnectionString.java:336) ~[graylog.jar:?]
#         at com.mongodb.MongoClientURI.<init>(MongoClientURI.java:256) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.getMongoClientURI(MongoDbConfiguration.java:59) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.validate(MongoDbConfiguration.java:64) ~[graylog.jar:?]
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]

How To Configure Alternative Storage for a Kubernetes (K8s) Worker Node

The illustration below assumes that a local RAID mount is being added to a worker node because it lacks sufficient local storage to run kubelet and Docker containers.

# On K8s controller, remove worker node
kubectl drain linux03 --ignore-daemonsets
kubectl delete node linux03

# On the worker node uninstall docker & kubelet
sudo apt-get remove docker-ce docker-ce-cli containerd.io kubelet

# Check the health of its RAID mount /dev/md0
mdadm --detail /dev/md0

# Sample expected output:
           Version : 1.2
     Creation Time : Fri Aug 13 23:46:13 2021
        Raid Level : raid10
        Array Size : 1953257472 (1862.77 GiB 2000.14 GB)
     Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sat Aug 28 23:39:08 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : bitmap
              Name : linux03:0  (local to host linux03)
              UUID : 
            Events : 1750
    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync set-A   /dev/sdg1
       1       8       81        1      active sync set-B   /dev/sdf1
       2       8       17        2      active sync set-A   /dev/sdb1
       3       8        1        3      active sync set-B   /dev/sda1

# Check the logical mount
mount=/nfs-share
df -hT -P $mount

# Sample expected output:
root@linux03:/home/kimconnect# df -hT -P $mount
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  1.8T   77M  1.7T   1% /nfs-share

# Prepare docker & kubelet redirected links
source1=/nfs-share/linux03/docker
source2=/nfs-share/linux03/kubelet
destinationdirectory=/var/lib/
sudo mkdir -p $source1
sudo mkdir -p $source2

# Optional: remove existing docker & kubelet directories
rm -rf /var/lib/kubelet
rm -rf /var/lib/docker

# Create links
sudo ln -sfn $source1 $destinationdirectory
sudo ln -sfn $source2 $destinationdirectory

# Verify
ls -la /var/lib

# Expected output:
root@linux03:/home/kim# ls /var/lib -la
total 180
drwxr-xr-x 45 root      root      4096 Aug 28 00:38 .
drwxr-xr-x 13 root      root      4096 Feb  1  2021 ..
drwxr-xr-x  4 root      root      4096 Feb  1  2021 AccountsService
drwxr-xr-x  5 root      root      4096 Aug 28 00:24 apt
drwxr-xr-x  2 root      root      4096 Sep 10  2020 boltd
drwxr-xr-x  2 root      root      4096 Aug 27 21:21 calico
drwxr-xr-x  8 root      root      4096 Aug 28 00:34 cloud
drwxr-xr-x  4 root      root      4096 Aug 27 23:52 cni
drwxr-xr-x  2 root      root      4096 Aug 27 19:38 command-not-found
drwx--x--x 11 root      root      4096 Aug 27 20:24 containerd
drwxr-xr-x  2 root      root      4096 Aug 27 19:57 dbus
drwxr-xr-x  2 root      root      4096 Apr 10  2020 dhcp
lrwxrwxrwx  1 root      root        25 Aug 27 23:24 docker -> /nfs-share/linux03/docker
drwxr-xr-x  3 root      root      4096 Aug 27 21:15 dockershim
drwxr-xr-x  7 root      root      4096 Aug 28 00:24 dpkg
drwxr-xr-x  3 root      root      4096 Feb  1  2021 fwupd
drwxr-xr-x  2 root      root      4096 Apr 20  2020 git
drwxr-xr-x  4 root      root      4096 Aug 27 19:39 grub
drwxr-xr-x  2 root      root      4096 Aug 27 19:51 initramfs-tools
lrwxrwxrwx  1 root      root        26 Aug 28 00:38 kubelet -> /nfs-share/linux03/kubelet
### truncated for brevity ###

# Reinstall docker & kubernetes
version=1.20.10-00
apt-get install -qy --allow-downgrades --allow-change-held-packages kubeadm=$version kubelet=$version kubectl=$version docker-ce docker-ce-cli containerd.io nfs-common
apt-mark hold kubeadm kubelet kubectl
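Once the packages are back, the worker has to rejoin the cluster. A minimal sketch for kubeadm-based clusters (the token and hash below are placeholders printed by the first command):

# On the controller: generate a fresh join command
kubeadm token create --print-join-command

# On the worker: run the printed command, e.g.
# sudo kubeadm join <controller-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>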

I may put together another illustration for NFS mounts, though it may not be necessary since the instructions would be mostly similar. The main difference is that the worker node must automatically mount the NFS share upon reboot; the symbolic-link commands would be the same.
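For reference, persisting an NFS mount across reboots is typically done with an /etc/fstab entry similar to the sketch below (the server address and paths are placeholders):

# Append an fstab entry and verify that it mounts cleanly
echo '10.10.10.5:/volume1/k8s  /nfs-share  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab
sudo mount -a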

PowerShell: Quick Snippet to Purge All ‘Orphaned’ Records of Resources in VMM

The Script:

# removeMissingResourcesInVmm.ps1
$noConfirmations=$false # $true = no confirmations, $false=confirm each stale record removal
function removeMissingResourcesInVmm($noConfirmations=$false){

  function confirmation($content,$testValue="I confirm",$maxAttempts=3){
    $confirmed=$false
    $cancelCondition=@('cancel','no','exit','nope')
    $attempts=0
    clear-host 
    write-host $($content|out-string).trim()
    write-host "`r`nPlease review this content for accuracy.`r`n"
    while ($attempts -le $maxAttempts){
        if($attempts++ -ge $maxAttempts){
            write-host "A maximum number of attempts have reached. No confirmations received!`r`n"
            break;
            }
        $userInput = Read-Host -Prompt "Please type in this value => $testValue <= to confirm. Input CANCEL to skip this item";
        if ($userInput.ToLower() -eq $testValue.ToLower()){
            $confirmed=$true;
            write-host "Confirmed!`r`n";
            break;                
        }elseif($userInput.tolower() -in $cancelCondition){
            write-host 'Cancel command received.'
            $confirmed=$false
            break
        }else{
            clear-host
            $content|write-host
            write-host "Attempt number $attempts of $maxAttempts`: $userInput does not match $testValue. Try again or Input CANCEL to skip this item`r`n"
            }
        }
    return $confirmed
  }

  try{
    write-host "Removing all stale ISO records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCISO | where {$_.State -eq "missing"} | Remove-SCISO
    }else{
      $confirmed=confirmation "Remove orphaned ISO records from VMM"
      if($confirmed){
        Get-SCISO | where {$_.State -eq "missing"} | Remove-SCISO
      }else{
        write-warning "Skipped orphaned ISO records removal"
      }
    }
     
    write-host "Removing all stale Custom Script records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCScript | where {$_.State -eq "missing"} | Remove-SCScript
    }else{
      $confirmed=confirmation "Remove orphaned Custom Script records from VMM"
      if($confirmed){
        Get-SCScript | where {$_.State -eq "missing"} | Remove-SCScript
      }else{
        write-warning "Skipped orphaned Custom Script records removal"
      }
    }
       
    write-host "Removing all stale Driver records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCDriverPackage | where {$_.State -eq "missing"} | Remove-SCDriverPackage
    }else{
      $confirmed=confirmation "Remove orphaned Driver records from VMM"
      if($confirmed){
        Get-SCDriverPackage | where {$_.State -eq "missing"} | Remove-SCDriverPackage
      }else{
        write-warning "Skipped orphaned Driver records removal"
      }
    }
     
    write-host "Removing all stale Application records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCApplicationPackage | where {$_.State -eq "missing"} | Remove-SCApplicationPackage
    }else{
      $confirmed=confirmation "Remove orphaned Application records from VMM"
      if($confirmed){
        Get-SCApplicationPackage | where {$_.State -eq "missing"} | Remove-SCApplicationPackage
      }else{
        write-warning "Skipped orphaned Application records removal"
      }
    }
     
    write-host "Removing all stale Custom Resource records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCCustomResource | where {$_.State -eq "missing"} | Remove-SCCustomResource
    }else{
      $confirmed=confirmation "Remove orphaned Custom Resource records from VMM"
      if($confirmed){
        Get-SCCustomResource | where {$_.State -eq "missing"} | Remove-SCCustomResource
      }else{
        write-warning "Skipped orphaned Custom Resource records removal"
      }
    }
     
    write-host "Removing all stale Virtual Disk records of resources from the VMM Library"
    if($noConfirmations){
      Get-SCVirtualHardDisk | where {$_.State -eq "missing"} | Remove-SCVirtualHardDisk
    }else{
      $confirmed=confirmation "Remove orphaned Virtual Disk records from VMM"
      if($confirmed){
        Get-SCVirtualHardDisk | where {$_.State -eq "missing"} | Remove-SCVirtualHardDisk
      }else{
        write-warning "Skipped orphaned Virtual Disk records removal"
      }
    }  
      
    # Remove stale virtual machine records in VMM
    # Note: this does not delete the VMs from Hyper-V
    $missingVms=Get-SCVirtualMachine|?{$_.status -eq 'Missing'}  
    write-host "There are $($missingVms.count) missing VMs being detected."
    if($missingVms){
      foreach ($missingVm in $missingVms){
        if($noConfirmations){
          Remove-SCVirtualMachine -vm $missingVm -Force
        }else{
          $confirmed=confirmation "Remove VM $($missingVm.Name) from VMM"
          if($confirmed){
            Remove-SCVirtualMachine -vm $missingVm -Force
          }
        }
      }
    }else{
      write-host "There are no virtual machines with current status of 'missing' to clear." -ForegroundColor Green
    }
    return $true
  }catch{
    write-warning $_
    return $false
  }
}

removeMissingResourcesInVmm $noConfirmations

The Individual Commands:

# How to purge all erroneous records of resources from the VMM Library

# ISOs
Get-SCISO | where {$_.State -eq "missing"} | Remove-SCISO

# Custom Scripts
Get-SCScript | where {$_.State -eq "missing"} | Remove-SCScript

# Drivers
Get-SCDriverPackage | where {$_.State -eq "missing"} | Remove-SCDriverPackage

# Applications
Get-SCApplicationPackage | where {$_.State -eq "missing"} | Remove-SCApplicationPackage

# Custom Resources
Get-SCCustomResource | where {$_.State -eq "missing"} | Remove-SCCustomResource

# Virtual Disks
Get-SCVirtualHardDisk | where {$_.State -eq "missing"} | Remove-SCVirtualHardDisk

# Virtual Machines
# Remove stale virtual machine records in VMM: VMs with a status of 'Missing'
# Note: this does not delete the VMs from Hyper-V
$missingVms=Get-SCVirtualMachine|?{$_.status -eq 'Missing'}
function confirmation($content,$testValue="I confirm",$maxAttempts=3){
  $confirmed=$false
  $cancelCondition=@('cancel','no','exit','nope')
  $attempts=0
  clear-host 
  write-host $($content|out-string).trim()
  write-host "`r`nPlease review this content for accuracy.`r`n"
  while ($attempts -le $maxAttempts){
      if($attempts++ -ge $maxAttempts){
          write-host "A maximum number of attempts have reached. No confirmations received!`r`n"
          break;
          }
      $userInput = Read-Host -Prompt "Please type in this value => $testValue <= to confirm. Input CANCEL to skip this item";
      if ($userInput.ToLower() -eq $testValue.ToLower()){
          $confirmed=$true;
          write-host "Confirmed!`r`n";
          break;                
      }elseif($userInput.tolower() -in $cancelCondition){
          write-host 'Cancel command received.'
          $confirmed=$false
          break
      }else{
          clear-host
          $content|write-host
          write-host "Attempt number $attempts of $maxAttempts`: $userInput does not match $testValue. Try again or Input CANCEL to skip this item`r`n"
          }
      }
  return $confirmed
}

write-host "There are $($missingVms.count) missing VMs being detected."
foreach ($missingVm in $missingVms){
  $confirmed=confirmation "Remove VM $($missingVm.Name) from VMM"
  if($confirmed){
    Remove-SCVirtualMachine -vm $missingVm -Force
  }  
}

Sample Outputs:

PS C:\Windows\system32> Get-SCISO | where {$_.State -eq "missing"} | Remove-SCISO
Release               :
State                 : Missing
LibraryShareId        : 00000000-0000-0000-0000-000000000000
SharePath             : C:\Windows\system32\vmguest.iso
FileShare             :
Directory             : C:\Windows\system32
Size                  : 0
IsOrphaned            : False
FamilyName            :
Namespace             :
ReleaseTime           :
HostVolumeId          :
HostVolume            :
Classification        :
HostId                : 
HostType              : VMHost
HostName              : hv1.intranet.kimconnect.com
VMHost                : hv1.intranet.kimconnect.com
LibraryServer         :
CloudId               :
Cloud                 :
LibraryGroup          :
GrantedToList         : {}
UserRoleID            : 00000000-0000-0000-0000-000000000000
UserRole              :
Owner                 :
ObjectType            : ISO
Accessibility         : Public
Name                  : vmguest
IsViewOnly            : False
Description           :
AddedTime             : 7/22/1920 9:04:52 AM
ModifiedTime          : 7/22/1920 9:04:52 AM
Enabled               : True
MostRecentTask        :
ServerConnection      : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                    : 862c2f67-4c2c-4588-8a4f-16ed3c64366f
MarkedForDeletion     : True
IsFullyCached         : True
MostRecentTaskIfLocal :

### Truncated similar outputs ### 

# Checking custom resources
PS C:\Windows\system32> Get-SCCustomResource|select name
Name
----
SAV_x86_en-US_4.9.305.198.cr
WebDeploy_x86_en-US_3.1237.1764.cr
WebDeploy_x64_en-US_3.1237.1764.cr
SAV_x64_en-US_4.9.305.198.cr

Problem: NextCloud Would Not Start Due to Versioning Variance

This issue occurred when NextCloud was upgraded after deployment. The source Docker image may specify an older version than the running instance, and this discrepancy will cause the pod to fail to recreate or start as a new container, as shown below:

# Pod scheduling status yields 'Error'
kimconnect@k8sController:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
clamav-0                                          1/1     Running   0          6d23h
collabora-collabora-code-69d74c979f-jp4p2         1/1     Running   0          6d19h
nextcloud-6cf9c65d85-42dx7                        1/2     Error     1          6s
nextcloud-db-postgresql-0                         1/1     Running   0          7d1h

# Further examination of the problem...
kimconnect@k8sController:~$ k describe pod nextcloud-6cf9c65d85-l9b99
Name:         nextcloud-6cf9c65d85-l9b99
Namespace:    default
Priority:     0
Node:         workder05/10.10.100.95
Start Time:   Fri, 20 Aug 2021 23:48:23 +0000
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=nextcloud
              app.kubernetes.io/name=nextcloud
              pod-template-hash=6cf9c65d85
Annotations:  cni.projectcalico.org/podIP: 172.16.90.126/32
              cni.projectcalico.org/podIPs: 172.16.90.126/32
Status:       Running
IP:           172.16.90.126
IPs:
  IP:           172.16.90.126
Controlled By:  ReplicaSet/nextcloud-6cf9c65d85
Containers:
  nextcloud:
    Container ID:   docker://4c202d2155dea39739db815feae271fb8f14438f44092049f3d55c70fbf819c0
    Image:          nextcloud:stable-fpm
    Image ID:       docker-pullable://nextcloud@sha256:641b1dc10b681e1245c6f5d6d366fa1cd7e018ff787cf690c1aa372ddc108671
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 20 Aug 2021 23:54:03 +0000
      Finished:     Fri, 20 Aug 2021 23:54:03 +0000
    Ready:          False
    Restart Count:  6
    Environment:
      POSTGRES_HOST:              nextcloud-db-postgresql.default.svc.cluster.local
      POSTGRES_DB:                nextcloud
      POSTGRES_USER:              <set to the key 'db-username' in secret 'nextcloud-db'>      Optional: false
      POSTGRES_PASSWORD:          <set to the key 'db-password' in secret 'nextcloud-db'>      Optional: false
      NEXTCLOUD_ADMIN_USER:       <set to the key 'nextcloud-username' in secret 'nextcloud'>  Optional: false
      NEXTCLOUD_ADMIN_PASSWORD:   <set to the key 'nextcloud-password' in secret 'nextcloud'>  Optional: false
      NEXTCLOUD_TRUSTED_DOMAINS:  kimconnect.com
      NEXTCLOUD_DATA_DIR:         /var/www/html/data
    Mounts:
      /usr/local/etc/php-fpm.d/memory_limit from nextcloud-phpconfig (rw,path="memory_limit")
      /usr/local/etc/php-fpm.d/post_max_size from nextcloud-phpconfig (rw,path="post_max_size")
      /usr/local/etc/php-fpm.d/upload_max_filesize from nextcloud-phpconfig (rw,path="upload_max_filesize")
      /usr/local/etc/php-fpm.d/upload_max_size from nextcloud-phpconfig (rw,path="upload_max_size")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdhxv (ro)
      /var/www/ from nextcloud-data (rw,path="root")
      /var/www/html from nextcloud-data (rw,path="html")
      /var/www/html/config from nextcloud-data (rw,path="config")
      /var/www/html/custom_apps from nextcloud-data (rw,path="custom_apps")
      /var/www/html/data from nextcloud-data (rw,path="data")
      /var/www/html/themes from nextcloud-data (rw,path="themes")
      /var/www/tmp from nextcloud-data (rw,path="tmp")
  nextcloud-nginx:
    Container ID:   docker://1fae573d1a0591058ad55f939b4762f01c7a5f6e7275d2348ff1bd287e077fe5
    Image:          nginx:alpine
    Image ID:       docker-pullable://nginx@sha256:e20c21e530f914fb6a95a755924b1cbf71f039372e94ac5ddcf8c3b386a44615
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 20 Aug 2021 23:48:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/nginx.conf from nextcloud-nginx-config (rw,path="nginx.conf")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdhxv (ro)
      /var/www/ from nextcloud-data (rw,path="root")
      /var/www/html from nextcloud-data (rw,path="html")
      /var/www/html/config from nextcloud-data (rw,path="config")
      /var/www/html/custom_apps from nextcloud-data (rw,path="custom_apps")
      /var/www/html/data from nextcloud-data (rw,path="data")
      /var/www/html/themes from nextcloud-data (rw,path="themes")
      /var/www/tmp from nextcloud-data (rw,path="tmp")
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nextcloud-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nextcloud-claim
    ReadOnly:   false
  nextcloud-phpconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nextcloud-phpconfig
    Optional:  false
  nextcloud-nginx-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nextcloud-nginxconfig
    Optional:  false
  default-token-bdhxv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bdhxv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nextcloud-6cf9c65d85-l9b99 to linux05
  Normal   Pulled     10m                  kubelet            Container image "nginx:alpine" already present on machine
  Normal   Created    10m                  kubelet            Created container nextcloud-nginx
  Normal   Started    10m                  kubelet            Started container nextcloud-nginx
  Normal   Created    9m47s (x4 over 10m)  kubelet            Created container nextcloud
  Normal   Started    9m46s (x4 over 10m)  kubelet            Started container nextcloud
  Normal   Pulled     8m55s (x5 over 10m)  kubelet            Container image "nextcloud:stable-fpm" already present on machine
  Warning  BackOff    18s (x51 over 10m)   kubelet            Back-off restarting failed container

# Checking the logs
kimconnect@k8sController:~$ k logs nextcloud-6cf9c65d85-l9b99 nextcloud
Can't start Nextcloud because the version of the data (21.0.4.1) is higher than the docker image version (20.0.8.1) and downgrading is not supported. Are you sure you have pulled the newest image version?

Solution:

# a. Create a backup copy of version.php
  sudo mount $nfsServer:/volume1/nextcloud /mnt/nextcloud
  cd /mnt/nextcloud/html
  cp version.php version.php.bak

# b. Edit the version.php file with this content
  vim version.php
########
# <?php
# $OC_Version = array(21,0,4,1); # change this value to array(20,0,8,1)
# $OC_VersionString = '21.0.4'; # change this value to '20.0.8'
# $OC_Edition = '';
# $OC_Channel = 'stable';
# $OC_VersionCanBeUpgradedFrom = array (
#   'nextcloud' =>
#   array (
#     '20.0' => true,
#     '21.0' => true,
#   ),
#   'owncloud' =>
#   array (
#     '10.5' => true,
#   ),
# );
# $OC_Build = '2021-08-03T15:44:43+00:00 c52fea0b16690b492f6c4175e1ae71d488936244';
# $vendor = 'nextcloud';
########

# c. Recreate the failed pod and verify that it's in 'Running' status

kimconnect@k8sController:~$ k delete pod nextcloud-6cf9c65d85-l9b99
pod "nextcloud-6cf9c65d85-l9b99" deleted
kimconnect@k8sController:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
clamav-0                                          1/1     Running   0          6d23h
collabora-collabora-code-69d74c979f-jp4p2         1/1     Running   0          6d19h
nextcloud-6cf9c65d85-dmg2s                        2/2     Running   0          17s
nextcloud-db-postgresql-0                         1/1     Running   0          7d1h

# d. Revert changes to version.php

cd /mnt/nextcloud/html
mv version.php version.php.old
mv version.php.bak version.php
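As an optional sanity check once the pod is back to 2/2 Running, the occ tool can confirm the reported version and that maintenance mode is off (a minimal sketch, assuming the default layout of the nextcloud fpm image and the container name used by this deployment):

kubectl exec -it deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c 'php /var/www/html/occ status'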

How To Move WordPress Site To Kubernetes Cluster

a. Create backups of source files and database

  - Logon to Current Hosting Provider to make backups
  - Files:
    - Assuming cPanel:
      - Login to cPanel
      - Click on 'File Manager'
      - Select public_html or the directory containing WordPress files
      - Select Compress from the top-right menu
      - Select 'Bzip2ed Tar Archive' (better compression than Gzip)
      - Click 'Compress File(s)' and wait for the process to finish
      - Right-click the newly generated public_html.tar.bz2 from cPanel File Manager > select Download
      - Find the file in a default download directory (e.g. /home/$(whoami)/Downloads/public_html.tar.bz2)
  - Database:
    - Assuming cPanel with phpMyAdmin
      - Click 'phpMyAdmin' from the 'DATABASES' control group
      - Click 'Export'
      - Set Export method = Quick, Format = Custom
      - Click Go
      - Find the *.sql file being downloaded into a default download directory (e.g. /home/$(whoami)/Downloads/localhost.sql)

b. Install Bitnami WordPress in a Kubernetes Cluster

# Add helm chart if not already available
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install WordPress with Dynamic NFS Provisioning
# Documentation: https://hub.kubeapps.com/charts/bitnami/wordpress/10.0.1
# Set variables
appName=kimconnectblog
domainName=blog.kimconnect.com
wordpressusername=kimconnect
wordpressPassword=SOMEPASSWORDHERE
rootPassword=SOMEPASSWORDHERE2
storageClass=nfs-client
# Install
helm install $appName bitnami/wordpress \
  --set persistence.accessMode=ReadWriteMany,persistence.storageClass=nfs-client \
  --set mariadb.primary.persistence.storageClass=nfs-client \
  --set wordpressUsername=$wordpressusername,wordpressPassword=$wordpressPassword \
  --set mariadb.auth.rootPassword=$rootPassword \
  --set mariadb.auth.password=$rootPassword \
  --set ingress.enabled=true,ingress.hostname=$domainName
# Patch the deployed ingress with an existing SSL cert
# Assuming the $appName-cert has already been generated
appName=kimconnectblog
domainName=blog.kimconnect.com
certName=$appName-cert
serviceName=$appName-wordpress
servicePort=80
cat <<EOF > $appName-patch.yaml
spec:
  tls:
  - hosts:
    - $domainName
    secretName: $certName
  rules:
  - host: $domainName
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: $serviceName
            port:
              number: $servicePort         
EOF
kubectl patch ingress/$appName-wordpress -p "$(cat $appName-patch.yaml)"
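If the $certName secret doesn't exist yet and cert-manager is installed, a Certificate resource along these lines can create it (the issuer name is an assumption; use whichever ClusterIssuer exists in the cluster):

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $certName
spec:
  secretName: $certName
  dnsNames:
    - $domainName
  issuerRef:
    name: letsencrypt-prod   # assumption: an existing ClusterIssuer
    kind: ClusterIssuer
EOF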

c. Import files and database onto new hosting server

  - Database:
    - Access DB server and import sql dump
      podName=kimconnectblog-mariadb-0
      kubectl exec --stdin --tty $podName -- /bin/bash
      rootPassword=SOMEPASSWORD
      echo "show databases;" | mysql -u root -p$rootPassword
      MariaDB [(none)]> show databases;exit;
        +--------------------+
        | Database           |
        +--------------------+
        | bitnami_wordpress  |
        | information_schema |
        | mysql              |
        | performance_schema |
        | test               |
        +--------------------+
        5 rows in set (0.009 sec)
      oldDb=kimconne_blog
      sqlDump=/bitnami/mariadb/data/kimconnect.sql
      mysql -uroot -p$rootPassword test < $sqlDump
      grantUser=bn_wordpress # this is the default Bitnami WordPress user
      echo "GRANT ALL PRIVILEGES ON $oldDb.* TO $grantUser;" | mysql -uroot -p$rootPassword
      #echo "create database $databaseName;" | mysql -uroot -p$rootPassword
      #mysql -uroot -p$rootPassword $oldDb -sNe 'show tables' | while read table; do mysql -uroot -p$rootPassword -sNe "RENAME TABLE $oldDb.$table TO $newDb.$table"; done
      #echo "create user kimconne_blog@localhost;grant all privileges on kimconne_blog.* to 'kimconne_blog';"| mysql -uroot -p$rootPassword
      #ALTER USER 'kimconne_blog'@'localhost' IDENTIFIED BY 'SOMEPASSWORDHERE';
  - Files:
    - Assuming nfs:
      nfsShare=k8s
      nfsServer=10.10.10.5
      sharePath=/volume1/$nfsShare
      mountPoint=/mnt/$nfsShare
      sudo mkdir $mountPoint
      sudo mount -t nfs $nfsServer:$sharePath $mountPoint # Test mounting
      sudo mount | grep $nfsShare # validate mount
      # Assuming Kubernetes NFS
      # sudo mv /home/$(whoami)/Downloads/localhost.sql $mountPoint/path_to_default-data-sitename-mariadb/data/localhost.sql
      # sudo mv /home/$(whoami)/Downloads/public_html.tar.bz2 $mountPoint/public_html.tar.bz2
      bz2File=/mnt/k8s/kimconnectblog/public_html.tar.bz2
      containerPath=/mnt/k8s/default-kimconnectblog-wordpress-pvc-9f1dd4bd-81f3-489f-9b76-bf70f4fd291c/wordpress/wp-content
      tar -xf $bz2File -C $containerPath
      cd $containerPath
      mv public_html/wp-content wp-content
      vim wp-config.php # edit wp config to match the imported database and its prefix

Dynamic NFS Provisioning in a Kubernetes Cluster

Step 1: Creating NFS Server

A. Create NFS Share on File Server
There are many ways to perform this task. Here’s an illustration of a manual method of enabling a standard Ubuntu server to serve as an NFS server.

Here’s a related blog with updated instructions: https://kimconnect.com/how-to-install-nfs-server-on-ubuntu-21-04/

# Install prerequisites:
sudo apt install nfs-kernel-server

# Create nfs share:
shareName=/export/kubernetes
sudo mkdir -p $shareName
sudo chown -R nobody: $shareName
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo vim /etc/exports
### Add this line
/export/kubernetes *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
###
sudo exportfs -rav
sudo exportfs -v

B. Testing access from a client
# Install prerequisite
sudo apt install nfs-common
# Mount, create/delete a file, and unmount
# Set variables
nfsShare=kubernetes # assuming that the 'kubernetes' share has already been created on the server
nfsServer=192.168.100.21 # assuming NAS servername is resolved to its correct IP
sharePath=/volume1/$nfsShare
mountPoint=/mnt/$nfsShare
sudo mkdir $mountPoint
sudo mount -t nfs $nfsServer:$sharePath $mountPoint # Test mounting
sudo mount | grep $nfsShare
touch $mountPoint/test.txt
ls $mountPoint
rm $mountPoint/test.txt
ls $mountPoint
sudo umount -f -l $mountPoint # or sudo umount $mountPoint

Step 2a: Install Dynamic NFS Provisioner Using Helm

# Check current helm repo
kim@linux01:~$ helm repo list
NAME                            URL
bitnami                         https://charts.bitnami.com/bitnami
ingress-nginx                   https://kubernetes.github.io/ingress-nginx
rancher-stable                  https://releases.rancher.com/server-charts/stable
jetstack                        https://charts.jetstack.io
k8s-at-home                     https://k8s-at-home.com/charts/
nextcloud                       https://nextcloud.github.io/helm/
chrisingenhaag                  https://chrisingenhaag.github.io/helm/
wiremind                        https://wiremind.github.io/wiremind-helm-charts

# Add repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

# The easy way
nfsServer=192.168.100.21
nfsShare=/volume1/k8s
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=$nfsServer \
  --set nfs.path=$nfsShare

# Sample output
NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Sun Aug  1 21:16:05 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Possible error:
Error: chart requires kubeVersion: >=1.9.0-0 <1.20.0-0 which is incompatible with Kubernetes v1.20.2

# Workaround: downgrade Kubernetes - not recommended!
version=1.20.0-00
sudo apt install -qy kubeadm=$version kubectl=$version kubelet=$version kubernetes-cni=$version --allow-downgrades

# If everything works out, storage class 'nfs-client' will become available
kim@linux01:~$ k get storageclasses.storage.k8s.io
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-class    kubernetes.io/nfs                               Retain          Immediate           true                   181d
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   25m

# set default storage class
defaultStorageClassName=nfs-client
kubectl patch storageclass $defaultStorageClassName -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Check storage classes for the suffix '(default)'
kim@linux01:~$ kubectl get storageclass
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-class              kubernetes.io/nfs                               Retain          Immediate           true                   181d
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   42m

# Test creating nfs claim
cat > test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-pv1
spec:
  storageClassName: nfs-client # this variable must match the helm nfs-subdir-external-provisioner's default!
  accessModes:
     - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
EOF
kubectl apply -f test-pvc.yaml

# Check result
kim@linux01:~$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-8ed4fc70-71c4-48c7-85a9-57175cfc21e7   500Mi      RWX            Delete           Bound    default/pvc-nfs-pv1        nfs-client              10s

kim@linux01:~$ k get pvc pvc-nfs-pv1
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv1   Bound    pvc-8ed4fc70-71c4-48c7-85a9-57175cfc21e7   500Mi      RWX            nfs-client     91s
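Optionally, before removing the claim, a throwaway pod can prove that the provisioned volume mounts and accepts writes (a minimal sketch; the busybox image and mount path are arbitrary). Delete the pod before deleting the claim below:

cat > test-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && cat /data/hello.txt"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: pvc-nfs-pv1
EOF
kubectl apply -f test-pod.yaml
kubectl logs nfs-write-test     # wait a few seconds for the pod to complete; expect: hello
kubectl delete -f test-pod.yaml # remove the pod before deleting the claim below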

kim@linux01:~$ k delete -f test-pvc.yaml
persistentvolumeclaim "pvc-nfs-pv1" deleted

Step 2b: Manual Installation of the Dynamic NFS Provisioner

# Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy

# Deploying the service accounts, accepting defaults
k create -f rbac.yaml

# Editing storage class
vim class.yaml

##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # value of true means retaining data upon pod terminations
allowVolumeExpansion: true # this attribute doesn't exist by default; note it must be an unquoted boolean
##############################################

# Deploying storage class
k create -f class.yaml

# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-ssd        k8s-sigs.io/nfs-subdir-external-provisioner     Delete          Immediate           false                  33s
nfs-class              kubernetes.io/nfs                               Retain          Immediate           true                   193d
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   12d

# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default

# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml

##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: X.X.X.X # change this value
            - name: NFS_PATH
              value: /nfs-share # change this value
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.93 # change this value
            path: /nfs-share # change this value
##############################################

# Creating nfs provisioning service pod
k create -f deployment.yaml
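Before creating any claims against the new storage class, it's worth confirming that the provisioner pod actually started (the label matches the deployment manifest above):

kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner --tail=20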

# Troubleshooting: example where the deployment was stuck because the objects defined in rbac.yaml had not yet been created
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name:               nfs-client-provisioner
Namespace:          default
CreationTimestamp:  Sat, 14 Aug 2021 00:09:24 +0000
Labels:             app=nfs-client-provisioner
Annotations:        deployment.kubernetes.io/revision: 1
Selector:           app=nfs-client-provisioner
Replicas:           1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=nfs-client-provisioner
  Service Account:  nfs-client-provisioner
  Containers:
   nfs-client-provisioner:
    Image:      k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Port:       <none>
    Host Port:  <none>
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        X.X.X.X
      NFS_PATH:          /nfs-share
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
  Volumes:
   nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    X.X.X.X
    Path:      /nfs-share
    ReadOnly:  false
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetCreated
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
OldReplicaSets:    <none>
NewReplicaSet:     nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m47s  deployment-controller  Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1

# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')

##### OLD NOTES: Feel free to ignore the below chicken scratch #######

# The less-easy way: manually install the provisioner
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/
cd nfs-subdir-external-provisioner/deploy

NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
NAMESPACE=${NS:-default}
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./rbac.yaml ./deployment.yaml
kubectl apply -f ./rbac.yaml

vim deployment.yaml
###
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage
            - name: NFS_SERVER
              value: 192.168.100.21
            - name: NFS_PATH
              value: /kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.21
            path: /kubernetes

k apply -f deployment.yaml

vim class.yaml
######
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
  onDelete: delete

# Create Persistent Volume Claim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

k apply -f class.yaml

# test-claim.yaml and test-pod.yaml ship in the repo's deploy directory
kubectl create -f test-claim.yaml
kubectl create -f test-pod.yaml
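
If everything is wired up, the test claim should bind and the provisioner should carve out a subdirectory on the NFS export according to the pathPattern above:

kubectl get pvc test-claim
kubectl get pv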

How To Set Up ClamAV Antivirus Scanner in Kubernetes

Assumptions:

  • A Kubernetes cluster is already set up
  • Helm and the MetalLB load balancer are installed beforehand
  • A static IP has already been excluded from the external DHCP server’s scope
  • The chosen IP falls within the address range (IP range) defined in the metallb-system ConfigMap

# Installation
instanceName=clamav
helm repo add wiremind https://wiremind.github.io/wiremind-helm-charts
helm install $instanceName wiremind/clamav
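
The chart deploys a StatefulSet named after the release (clamav here); a quick way to wait for it to come up:

kubectl rollout status statefulset/clamav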

# Set static IP for the service
appName=clamav
externalIPs=10.10.10.151
kubectl patch svc $appName -p '{"spec":{"externalIPs":["'$externalIPs'"]}}'
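
To confirm clamd is reachable on the external IP, note that it answers a PING command on its TCP port with PONG. A minimal check, assuming the IP above and that nc is available on the client machine:

echo PING | nc -w 3 10.10.10.151 3310    # expect: PONG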

# Reverse static IP assignment
kubectl patch svc clamav -p '{"spec":{"externalIPs":[]}}'

# How to Uninstall
# helm uninstall clamav

# Application
# NextCloud's antivirus module can make use of this service (see the occ sketch below):
# https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/antivirus_configuration.html#configuring-clamav-on-nextcloud
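
A minimal sketch of the NextCloud side, assuming the nextcloud deployment from the earlier sections, the clamav ClusterIP service in the default namespace, and the files_antivirus app's av_mode/av_host/av_port keys (key names can vary by app version; the admin UI under Settings > Security remains the documented path):

kubectl exec deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c "php /var/www/html/occ app:install files_antivirus"   # skip if already installed
kubectl exec deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c "php /var/www/html/occ config:app:set files_antivirus av_mode --value=daemon"
kubectl exec deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c "php /var/www/html/occ config:app:set files_antivirus av_host --value=clamav.default.svc.cluster.local"
kubectl exec deploy/nextcloud -c nextcloud -- su -s /bin/sh www-data -c "php /var/www/html/occ config:app:set files_antivirus av_port --value=3310"
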
Result:

rambo@masterbox:~$ helm install $instanceName wiremind/clamav
NAME: clamav
LAST DEPLOYED: Thu Jul 29 22:51:20 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. To connect to your ClamAV instance from outside the cluster execute the following commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=clamav,app.kubernetes.io/instance=clamav" -o jsonpath="{.items[0].metadata.name}")
  echo 127.0.0.1:3310
  kubectl port-forward $POD_NAME 3310:3310

bruce@masterbox:~$ k describe statefulsets.apps clamav 
Name:               clamav
Namespace:          default
CreationTimestamp:  Thu, 29 Jul 2021 22:51:22 +0000
Selector:           app.kubernetes.io/instance=clamav,app.kubernetes.io/name=clamav
Labels:             app.kubernetes.io/instance=clamav
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=clamav
                    app.kubernetes.io/version=1.8
                    helm.sh/chart=clamav-2.0.0
Annotations:        meta.helm.sh/release-name: clamav
                    meta.helm.sh/release-namespace: default
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app.kubernetes.io/instance=clamav
           app.kubernetes.io/name=clamav
  Containers:
   clamav:
    Image:        mailu/clamav:1.8
    Port:         3310/TCP
    Host Port:    0/TCP
    Liveness:     tcp-socket :clamavport delay=300s timeout=1s period=10s #success=1 #failure=3
    Readiness:    tcp-socket :clamavport delay=90s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from clamav-data (rw)
  Volumes:
   clamav-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Volume Claims:  <none>
Events:         <none>

kim@masterbox:~$ k describe service clamav 
Name:              clamav
Namespace:         default
Labels:            app.kubernetes.io/instance=clamav
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=clamav
                   helm.sh/chart=clamav-2.0.0
Annotations:       meta.helm.sh/release-name: clamav
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/instance=clamav,app.kubernetes.io/name=clamav
Type:              ClusterIP
IP Families:       <none>
IP:                10.104.143.167
IPs:               10.104.143.167
Port:              clamavport  3310/TCP
TargetPort:        3310/TCP
Endpoints:         172.16.90.171:3310
Session Affinity:  None
Events:            <none>

PowerShell: Overcome Issues with Error 13932 in SCVMM When Refreshing Virtual Machines

Dealing with Clusters

# refreshCluster.ps1
# Function to refresh a cluster in VMM in anticipation of errors with unregistered SMB/CIFS shares

$clustername='cluster-101.kimconnect.com'
$runasAccount='domain\hyperv-admin'

function refreshCluster($clusterName,$runasAccount){    
  # Function to Register FileShare to a Cluster in SCVMM
  function registerFileShareToCluster{
    param(
      $clustername,
      $fileSharePath,
      $runasAccount
    )
    $ErrorActionPreference='Stop'
    try{
      Import-Module -Name "virtualmachinemanager"
      # Preempt this error
      # Error (26193)
      # No Run As account is associated with the host
      if($runasAccount){
        $runas = Get-SCRunAsAccount -Name $runasAccount
        $hostCluster = Get-SCVMHostCluster -Name $clustername
        Set-SCVmHostCluster -VMHostCluster $hostCluster -VMHostManagementCredential $runas
      }
      <# Got this error
      Set-SCVmHostCluster : A Hardware Management error has occurred trying to contact server
      :n:CannotProcessFilter :HRESULT 0x8033801a:No instance found with given property values. .
      WinRM: URL: [http://serverFQDN:5985], Verb: [INVOKE], Method: [AddToLocalAdminGroup], Resource:
      [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/AgentManagement]
      (Error ID: 2927, Detailed Error: Unknown error (0x8033801a))
  
      Check that WinRM is installed and running on server. For more information use the command
      "winrm helpmsg hresult" and http://support.microsoft.com/kb/2742275.
  
      To restart the job, run the following command:
      PS> Restart-Job -Job (Get-VMMServer localhost | Get-Job | where { $_.ID -eq })
      At line:1 char:1
      + Set-SCVmHostCluster -VMHostCluster $hostCluster -VMHostManagementCred ...
      + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          + CategoryInfo          : ReadError: (:) [Set-SCVMHostCluster], CarmineException
          + FullyQualifiedErrorId : 2927,Microsoft.SystemCenter.VirtualMachineManager.Cmdlets.SetHostClusterCmdlet
      #>
      <# preempt this error:
      Register-SCStorageFileShare : A parameter cannot be found that matches parameter name 'VMHostManagementCredential'.
      At line:1 char:87
      + ... ePath -VMHostCluster $hostCluster -VMHostManagementCredential $runasA ...
      +                                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~
          + CategoryInfo          : InvalidArgument: (:) [Register-SCStorageFileShare], ParameterBindingException
          + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.SystemCenter.VirtualMachineManager.Cmdlets.RegisterSCSt
        orageFileShareCmdlet
      #>
      Register-SCStorageFileShare -FileSharePath $fileSharePath -VMHostCluster $hostCluster
      <# This error can safely be ignored
      Error (26233)
      Capacity/Free space cannot be calculated for \\NAS\CIFSSHARE. Failed to retrieve information with Win32 error code 64.
      #>  
  
      # This snippet is to register the SMB server onto VMM as a resource. It's optional
      # $servername='servername'
      # $shareName='test'
      # $addedShare = Get-SCStorageFileShare -Name "$servername\$sharename"
      # Register-SCStorageFileShare -StorageFileShare $addedShare -VMHostCluster $hostCluster
      # Set-SCVMHostCluster -RunAsynchronously -VMHostCluster $hostCluster -VMHostManagementCredential $runas
      return $true
    }catch{
      write-warning $_
      return $false
    }
  
  }
  # $clustername='CLUSTER-9999'
  # $fileSharePaths=@(
  #   '\\FILESERVER01\SHARE01',
  #   '\\FILESERVER01\SHARE02'
  # )
  # $runasAccount='domain\hyperv-admin'
  # foreach($fileSharePath in $fileSharePaths){  
  #   $success=registerFileShareToCluster $clusterName $fileSharePath $runasAccount
  #   if($success){
  #     write-host "$fileSharePath added successfully" -ForegroundColor Green
  #   }else{
  #     write-host "$fileSharePath was NOT added" -ForegroundColor Yellow
  #   }
  # }
  
  #Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
  Import-Module -Name "virtualmachinemanager"
  #$guestVMs=Get-ClusterResource -cluster $clustername|?{$_.resourcetype.name -eq 'virtual machine'}
  $guestVMs=Get-ClusterGroup -Cluster $clustername|?{$_.GroupType -eq 'VirtualMachine'}
  foreach ($vm in $guestVMs){
      #[string]$vmName=$_.OwnerGroup.Name
      $vmName=$vm.Name
      $status=((Get-SCVirtualMachine -Name $vmName).StatusString|out-string).trim()
      if($status -notmatch 'Stopped|Running' -and !(!$status)){
          try{            
            try{
              Read-SCVirtualMachine $vmName -EA Stop
              write-host "$vmName refresh initiated." -ForegroundColor Green
            }catch [Microsoft.VirtualManager.Utils.CarmineException]{
              $errorMessage=$_
                $smbPath=[regex]::match($errorMessage,'\\\\(.*)\\').Value
                if($smbPath){
                    write-host "Add this SMB/CIFS path the cluster: $smbPath"
                    $smbRegistered=registerFileShareToCluster $clustername $smbPath $runasAccount
                    if($smbRegistered){
                      $null=Refresh-VM -VM $vmName -RunAsynchronously -Force;
                      write-host "$vmName refresh initiated." -ForegroundColor Yellow
                      #$null=Read-SCVirtualMachine -vm $vmName -Force -RunAsynchronously; # This statement missed 'stopped' VMs
                    }else{
                      write-warning "Unable to register $smbPath"
                    }
                }else{
                  write-host $errorMessage
                }
            }
          }catch{
            write-host $_
          }
      }else{
          write-host "$vmName status $(if($status){$status}else{'Unknown'})." -ForegroundColor Gray         
      }
    }
}
refreshCluster $clustername $runasAccount

Dealing with Individual Hyper-V Hosts

# refreshHost.ps1
$hostName='hyperv-2000.kimconnect.com'
$runasAccount='domain\hyperv-admin'

function refreshHost($hostname,$runasAccount){
  
  # Sub-routine to add Share Path to Hyper-V Host
  function addFileSharePathToHost($hostName,$sharePath,$runasAccount){
    try{
      $vmHost = Get-SCVMHost -ComputerName $hostName -EA Stop
      Register-SCStorageFileShare -FileSharePath $sharePath -VMHost $vmHost -EA Stop
      return $true
    }catch [Microsoft.VirtualManager.Utils.CarmineException]{
      $errorMessage=$_
      if($errorMessage -like "*Error ID: 26193*"){
        $runas = Get-SCRunAsAccount -Name $runasAccount
        Set-SCVmHost -VMHost $vmHost -VMHostManagementCredential $runas
        Register-SCStorageFileShare -FileSharePath $sharePath -VMHost $vmHost
        return $true
      }else{
        write-warning "$errorMessage"
        return $false
      }
    }catch{
      write-warning $_
      return $false
    }
    #Set-SCVMHost -VMHost $vmHost -RunAsynchronously -BaseDiskPaths $sharePath #-VMPaths "C:\ProgramData\Microsoft\Windows\Hyper-V"
  }
  
  $unsupportedSharedFiles=Get-SCVMHost $hostname | Get-SCVirtualMachine | ? {$_.Status -eq 'UnsupportedSharedFiles'} | Select Name,State,VMHost
  foreach($vmName in $unsupportedSharedFiles.Name){
    try{
      Read-SCVirtualMachine $vmName -EA Stop
      write-host "$vmName refresh initiated." -ForegroundColor Green
    }catch [Microsoft.VirtualManager.Utils.CarmineException]{
      $errorMessage=$_
        $smbPath=[regex]::match($errorMessage,'\\\\(.*)\\').Value
        if($smbPath){
            write-host "Add this SMB/CIFS path the cluster: $smbPath"
            $smbRegistered=addFileSharePathToHost $hostname $smbPath $runasAccount
            if($smbRegistered){
              $null=Refresh-VM -VM $vmName -RunAsynchronously -Force;
              write-host "$vmName refresh initiated." -ForegroundColor Yellow
              #$null=Read-SCVirtualMachine -vm $vmName -Force -RunAsynchronously; # This statement missed 'stopped' VMs
            }else{
              write-warning "Unable to register $smbPath"
            }
        }else{
          write-host $errorMessage
        }
    }
  }  
}
refreshHost $hostName $runasAccount

Kubernetes – Pausing Applications by Scaling Deployments or Stateful Sets

# Pause application
kubectl scale deploy nextcloud --replicas=0
kubectl scale statefulsets nextcloud-db-postgresql --replicas=0
kubectl scale deploy pihole --replicas=0

# Resume application
kubectl scale deploy nextcloud --replicas=1
kubectl scale statefulsets nextcloud-db-postgresql --replicas=1
kubectl scale deploy pihole --replicas=1

# Alternative: scale all deployments in a namespace at once
kubectl scale deploy -n default --replicas=0 --all
kubectl scale deploy -n default --replicas=1 --all
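
Scaling everything back to 1 assumes each workload originally ran a single replica. To restore counts exactly, a sketch that records the current values (default namespace assumed) before pausing:

# Save current replica counts so the resume step can restore them
kubectl get deploy -n default -o jsonpath='{range .items[*]}deploy {.metadata.name} {.spec.replicas}{"\n"}{end}' | tee replicas-before-pause.txt
kubectl get statefulset -n default -o jsonpath='{range .items[*]}statefulset {.metadata.name} {.spec.replicas}{"\n"}{end}' | tee -a replicas-before-pause.txt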

Installing VMware Tools on Linux Guest Virtual Machines

Installation Process

# Installing VMware Tools
mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cp /mnt/cdrom/VMwareTools-*.tar.gz /tmp/
cd /tmp
tar -zxvf VMwareTools-*.tar.gz
cd vmware-tools-distrib
./vmware-install.pl

Sample Output

admin@testlinux:/tmp/vmware-tools-distrib# ./vmware-install.pl 
open-vm-tools packages are available from the OS vendor and VMware recommends 
using open-vm-tools packages. See http://kb.vmware.com/kb/2073803 for more 
information.
Do you still want to proceed with this installation? [no] yes

INPUT: [yes]

Creating a new VMware Tools installer database using the tar4 format.

Installing VMware Tools.

In which directory do you want to install the binary files? 
[/usr/bin] 

INPUT: [/usr/bin]  default

What is the directory that contains the init directories (rc0.d/ to rc6.d/)? 
[/etc] 

INPUT: [/etc]  default

What is the directory that contains the init scripts? 
[/etc/init.d] 

INPUT: [/etc/init.d]  default

In which directory do you want to install the daemon files? 
[/usr/sbin] 

INPUT: [/usr/sbin]  default

In which directory do you want to install the library files? 
[/usr/lib/vmware-tools] 

INPUT: [/usr/lib/vmware-tools]  default

The path "/usr/lib/vmware-tools" does not exist currently. This program is 
going to create it, including needed parent directories. Is this what you want?
[yes] 

INPUT: [yes]  default

In which directory do you want to install the common agent library files? 
[/usr/lib] 

INPUT: [/usr/lib]  default

In which directory do you want to install the common agent transient files? 
[/var/lib] 

INPUT: [/var/lib]  default

In which directory do you want to install the documentation files? 
[/usr/share/doc/vmware-tools] 

INPUT: [/usr/share/doc/vmware-tools]  default

The path "/usr/share/doc/vmware-tools" does not exist currently. This program 
is going to create it, including needed parent directories. Is this what you 
want? [yes] 

INPUT: [yes]  default

The installation of VMware Tools 10.3.22 build-15902021 for Linux completed 
successfully. You can decide to remove this software from your system at any 
time by invoking the following command: "/usr/bin/vmware-uninstall-tools.pl".

Before running VMware Tools for the first time, you need to configure it by 
invoking the following command: "/usr/bin/vmware-config-tools.pl". Do you want 
this program to invoke the command for you now? [yes] 

INPUT: [yes]  default

Initializing...

Segmentation fault

Making sure services for VMware Tools are stopped.

Stopping VMware Tools services in the virtual machine:
   Guest operating system daemon:                                      done
   VMware User Agent (vmware-user):                                    done
   Unmounting HGFS shares:                                             done
   Guest filesystem driver:                                            done


The installation status of vmsync could not be determined. Skipping installation.

The installation status of vmci could not be determined. Skipping installation.

The installation status of vsock could not be determined. Skipping installation.

The installation status of vmxnet3 could not be determined. Skipping installation.

The installation status of pvscsi could not be determined. Skipping installation.

The installation status of vmmemctl could not be determined. Skipping installation.

The VMware Host-Guest Filesystem allows for shared folders between the host OS 
and the guest OS in a Fusion or Workstation virtual environment.  Do you wish 
to enable this feature? [no] 

INPUT: [no]  default

The vmxnet driver is no longer supported on kernels 3.3 and greater. Please 
upgrade to a newer virtual NIC. (e.g., vmxnet3 or e1000e)

The vmblock enables dragging or copying files between host and guest in a 
Fusion or Workstation virtual environment.  Do you wish to enable this feature?
[no] 

INPUT: [no]  default


Skipping configuring automatic kernel modules as no drivers were installed by 
this installer.

Do you want to enable Guest Authentication (vgauth)? Enabling vgauth is needed 
if you want to enable Common Agent (caf). [yes] 

INPUT: [yes]  default

Do you want to enable Common Agent (caf)? [no] 

INPUT: [no]  default

No X install found.


Skipping rebuilding initrd boot image for kernel as no drivers to be included 
in boot image were installed by this installer.

Generating the key and certificate files.
Successfully generated the key and certificate files.
The configuration of VMware Tools 10.3.22 build-15902021 for Linux for this 
running kernel completed successfully.

You must restart your X session before any mouse or graphics changes take 
effect.

To enable advanced X features (e.g., guest resolution fit, drag and drop, and 
file and text copy/paste), you will need to do one (or more) of the following:
1. Manually start /usr/bin/vmware-user
2. Log out and log back into your desktop session
3. Restart your X session.

Found VMware Tools CDROM mounted at /mnt/cdrom. Ejecting device /dev/sr0 ...
No eject (or equivilant) command could be located.
Eject Failed:  If possible manually eject the Tools installer from the guest 
cdrom mounted at /mnt/cdrom before canceling tools install on the host.
Enjoy,

--the VMware team

Uninstallation Process

# Uninstalling VMware Tools
cd
rm /tmp/VMwareTools-*.tar.gz
rm -rf /tmp/vmware-tools-distrib
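
As the installer output above notes, VMware recommends the distribution-packaged open-vm-tools over the legacy tarball. A minimal alternative, assuming the guest has access to its distro repositories:

# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y open-vm-tools
# RHEL/CentOS
sudo yum install -y open-vm-tools
# Confirm the daemon is running
systemctl status vmtoolsd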