Microsoft Dynamics CRM: Sluggish Record Creation – Slow to Update Views

Symptom:

Event Logs with repeated entries…

Log Name: Application
Source: MSCRMAsyncService
Date: 11/15/2021 01:55:49 AM
Event ID: 25349
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: CRM02.kimconnect.com
Description:
Async backlog is detected in the Queue: AsyncOperation for organization. OrganizationId: {SomeID} Sample 1: Number of Jobs Backlogged: 1, Max Latency Observed: 25847 seconds Sample 2: Number of Jobs Backlogged: 1, Max Latency Observed: 26748 seconds Sample 3: Number of Jobs Backlogged: 1, Max Latency Observed: 27648 seconds Sample 4: Number of Jobs Backlogged: 1, Max Latency Observed: 28548 seconds
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="MSCRMAsyncService" />
<EventID Qualifiers="49152">25349</EventID>
<Level>2</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2021-11-16T01:55:49.056151900Z" />
<EventRecordID>102675119</EventRecordID>
<Channel>Application</Channel>
<Computer>CRM02.kimconnect.com</Computer>
<Security />
</System>
<EventData>
<Data>AsyncOperation</Data>
<Data>SOMEID</Data>
<Data>1</Data>
<Data>25847</Data>
<Data>1</Data>
<Data>26748</Data>
<Data>1</Data>
<Data>27648</Data>
<Data>1</Data>
<Data>28548</Data>
</EventData>
</Event>
Resolution:

Option A: Update the status of all emails stuck in ‘Pending Send’ to ‘Cancelled’
Option B: Ensure that the email router account is valid and is sending emails smoothly
Option C: Reboot the Frontend (Web Tier) and Backend (SQL, Async) servers
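Option A can be scripted. Below is a sketch using the community Microsoft.Xrm.Data.PowerShell module; the server URL is a placeholder, and while statuscode 6 is ‘Pending Send’ for the email entity in default deployments, verify the codes in your own org before running anything like this.

```powershell
# Sketch only: bulk-cancel emails stuck in 'Pending Send' (verify status codes in your org)
Import-Module Microsoft.Xrm.Data.PowerShell

# Placeholder on-premises connection; use Connect-CrmOnline for Dynamics 365 Online
$conn = Connect-CrmOnPremDiscovery -ServerUrl 'https://crm02.kimconnect.com' -Credential (Get-Credential)

# statuscode 6 = 'Pending Send' on the email entity in default deployments
$pending = Get-CrmRecords -conn $conn -EntityLogicalName email `
  -FilterAttribute statuscode -FilterOperator eq -FilterValue 6 -Fields activityid

foreach ($record in $pending.CrmRecords) {
  # Setting statecode/statuscode to Canceled closes the activity
  Set-CrmRecordState -conn $conn -EntityLogicalName email -Id $record.activityid `
    -StateCode Canceled -StatusCode Canceled
}
```

Cancelling through the platform (rather than direct SQL edits) keeps the change supported and lets workflows react normally.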

How to Know if Your Colleague is a God (Like Thor)?

Sometimes, I sit back and watch the mind games in politics, companies, churches, and even small families… Here are my observations, abstracted from a movie:

Besides Thor and Captain America, there is one other entity that could lift Thor’s hammer (Mjolnir), and his name is Vision. When watching that scene, I thought to myself, ‘how does one determine the worthiness of a person?’ … If it were up to me, I would devise a structured approach to measuring a person’s multi-faceted character thresholds, as meta-psychological evaluations are hard to grasp without objective data. Here are 10 items that I doubt the Marvel Comics writers have considered:

1. Moral compass: bring up very hard moral topics (e.g. abortion due to rape, sacrificing one child to save several)
2. Compassion: observe the facial expression of the subject when confronted with someone else’s tragedy
3. Incorruptibility: test whether the person would cheat, steal, or abuse power if given an opportunity
4. Trustworthiness: check the reliability of timeliness & delivery based on his/her own commitments
5. Flexibility: force one to come up with an impossible solution, such as the ‘Kobayashi Maru’ test
6. Resiliency: how does the subject behave when defeated in a game, in a logical argument, or in a fight? Another method would be to temporarily restrict blood circulation to this person’s brain while he performs a difficult test, under duress (e.g. place tourniquets on the man’s limbs, then make him solve math problems or dodge rubber bullets)
7. Cleverness: how about a standardized IQ test? If not, present a problem that requires intelligence to solve (what’s the distance between 2 vehicles after 30 minutes if the difference of their speeds is 50% and their directions are opposite? No scratch paper allowed, as the solution would be a math equation rather than a static value)
8. Subconscious personality: give this person plenty of alcohol (or sodium thiopental)
9. Decisiveness: how long does one require to make a simple, complex, and very complex decision of high quality?
10. Vices: does the person have an addiction to any of these 4 dopamine abuses: (a) gambling (b) drugs/alcohol (c) sex (d) video games

Using Microsoft Virtual Machine Manager (VMM) to Create Private Clouds

Step 1: Create a New Cloud Instance
Preparation:

Create a new Active Directory Group (‘Test Cloud Administrators’) and a new user (‘vmmtest’)

$groupName='Test Cloud Administrators'
$samAccountName='TestCloudAdministrators'
$container="CN=Users,DC=Intranet,DC=KIMCONNECT,DC=Com"
New-ADGroup -Name $groupName -SamAccountName $samAccountName -GroupCategory Security -GroupScope Global -DisplayName $groupName -Path $container -Description $groupName

$newUsername='vmmtest'
$encryptedPassword=Read-Host -AsSecureString "Input User Password for account $newUsername"
New-ADUser -Name $newUsername -SamAccountName $newUsername -Path $container -Enabled $True -AccountPassword $encryptedPassword
Add-ADGroupMember -Identity $groupName -Members $newUsername

$groupName='Test VMM Read-only Admins'
$samAccountName=$groupName -replace ' ','_'
New-ADGroup -Name $groupName -SamAccountName $samAccountName -GroupCategory Security -GroupScope Global -DisplayName $groupName -Path $container -Description $groupName

$newUsername='VMM_Test_Admin_RO'
$encryptedPassword=Read-Host -AsSecureString "Input User Password for account $newUsername"
New-ADUser -Name $newUsername -SamAccountName $newUsername -Path $container -Enabled $True -AccountPassword $encryptedPassword
Add-ADGroupMember -Identity $groupName -Members $newUsername

Grant ‘Test Cloud Administrators’ Group RDP access to VMM Server:

$groupEntity='Intranet\Test Cloud Administrators'
Add-LocalGroupMember -Group 'Remote Desktop Users' -Member $groupEntity
$groupEntity='Intranet\Test VMM Read-only Admins'
Add-LocalGroupMember -Group 'Remote Desktop Users' -Member $groupEntity
Use VMM To Create New Clouds

Start Virtual Machine Manager > right-click Clouds > select ‘Create Cloud’ to initiate the Create Cloud Wizard > Input a name for this new cloud (e.g. ‘Private Cloud 1’ or ‘Test Cloud’) > Next

Put a check mark next to the appropriate container > Next

Select the appropriate Network > Next

If necessary, select the appropriate NLB > Click Next

If necessary, select appropriate template > Next

If necessary, select the appropriate port classification > Next

Select the appropriate storage > Next

Click Browse to select an appropriate Stored VM Path > if necessary, click Add to select read-only library shares (each path must be unique)

Review the storage path and library shares > click Next when ready

Set allocations of CPU, Memory, and Storage resources > Next

Select the available capability profile(s) > Next

If necessary, select the replication groups > Next

Pick an appropriate QoS policy > Next

Review the summary > click Finish when done

Possible Error:

---------------------------
Virtual Machine Manager
---------------------------
The specified path '\\FILESERVER\MSSCVMMLibrary' is not unique.

Ensure that the path or part of the path that you provided is not used as a writable library share path on a private cloud, a read-only share path on a private cloud, or a user role data path on a self-service user role.

ID: 23505
---------------------------
OK
---------------------------

Workaround: remove the read-only library shares

Observe the Jobs window for the Cloud Creation progress

When the wizard has completed, a new Cloud item would appear as an icon under the Clouds tab

Performing the same steps via Scripting (obtained from ‘view script’ button):

Set-SCCloudCapacity -JobGroup "74b6-462e-877e" -UseCustomQuotaCountMaximum $true -UseMemoryMBMaximum $false -UseCPUCountMaximum $false -UseStorageGBMaximum $false -UseVMCountMaximum $true -MemoryMB 524288 -CPUCount 50 -StorageGB 6000

$resources = @()
$resources += Get-SCLogicalNetwork -ID "92d8-4678-a429"

$resources += Get-SCStorageClassification -ID "f9f9-4d3f-80c6"


$addCapabilityProfiles = @()
$addCapabilityProfiles += Get-SCCapabilityProfile -Name "Hyper-V"

Set-SCCloud -JobGroup "74b6-462e-877e" -RunAsynchronously -ReadWriteLibraryPath "\\VMMSERVER\MSSCVMMLibrary\Templates" -AddCloudResource $resources -AddCapabilityProfile $addCapabilityProfiles

$hostGroups = @()
$hostGroups += Get-SCVMHostGroup -ID "fa00-47f0-a451"
New-SCCloud -JobGroup "74b6-462e-877e" -VMHostGroup $hostGroups -Name "Test Cloud" -Description "" -RunAsynchronously
Step 2: Create a Role Based Access Control

Please note that this section creates a ‘VM Administrator’ role, which is only available in Virtual Machine Manager (VMM) 2019 on Windows Server 2019. This role has a broader scope of access than ‘Tenant Administrator’, which may be more fitting for granting limited self-service guest VM administrator level access to ‘virtual clouds’ without full visibility into the cluster. Therefore, treat these steps as informational; the ‘Tenant Administrator’ RBAC is advisable in most scenarios.

To create an RBAC role for VM administrator, go to Settings > right-click User Roles > Create User Role

Type in the name as ‘Test Cloud Administrator’ > Next

Select ‘Virtual Machine Administrator’ > Next

Click Add > select Active Directory Users or Groups > OK > Next

Narrow down the scope (e.g. ‘Test Cloud’) > Next

Put a check mark next to each desired permission (as listed below) > Next

Role Based Access Controlled Virtual Machine Administrator Permissions:
- Checkpoint: Create and manage virtual machine checkpoints
- Checkpoint (Restore only): Restore virtual machine checkpoints, but cannot create them
- Deploy: Create virtual machines and services from VHDs or templates
- Deploy (From template only): Create virtual machines and services from templates only
- Deploy shielded: Create shielded virtual machines
- Local Administrator: Grants local administrator rights on virtual machines
- Manage Azure Profiles: Create and manage Azure profiles
- Migrate virtual machine and storage: Migrate virtual machines across hosts and clouds, and migrate storage of virtual machines
- Pause and resume: Pause and Resume virtual machines and services
- Receive: Receive resources from other self-service users
- Remote connection: Remotely connect to virtual machines
- Remove: Remove virtual machines and services
- Save: Save virtual machines and services
- Share: Share resources with other self-service users
- Shutdown: Shut down virtual machines
- Start: Start virtual machines and services
- Stop: Stop virtual machines and services
- Store and re-deploy: Store virtual machines in the library, and re-deploy those virtual machines
- Update VM functional level: Update Functional Level of the Virtual Machines

Add Library Servers (if required) > Next > Add ‘Run As Accounts’ (if required) > Next > Finish

Creating VM Administrator RBAC via Scripting:

$cloudsToAdd_0 = Get-SCCloud -ID "4cbb-4643-9bf9"
Add-SCUserRolePermission -Cloud $cloudsToAdd_0 -JobGroup "37f5-4362-84c8"
$scopeToAdd = @()
$scopeToAdd += Get-SCCloud -ID "4cbb-4643-9bf9"
Set-SCUserRole -JobGroup "37f5-4362-84c8" -AddMember @("INTRANET\TestAdmins") -AddScope $scopeToAdd -Permission @("Checkpoint", "CheckpointRestoreOnly", "CreateFromVHDOrTemplate", "Create", "AllowLocalAdmin", "MigrateVM", "PauseAndResume", "Shutdown", "Start", "Stop", "UpdateVMFunctionalLevel")
New-SCUserRole -Name "Test Cloud Administrator" -UserRoleProfile "VMAdmin" -Description "" -JobGroup "37f5-4362-84c8"
Step 3: Associating Guest VMs to Virtual Clouds

Note: assigning VMs to individual clouds is only possible if the Cloud entity has been associated with a Host Group that contains online Hyper-V servers or clusters.

To associate individual virtual machines (VMs) with a particular ‘cloud’, run Virtual Machine Manager (VMM) > select VMs and Services > locate the desired VM > right-click that VM > Properties > select the General tab > pick the correct cloud name in the drop-down menu > OK to save

Once a VM has been configured toward a Cloud, it would be visible when that Cloud is selected

Bonus Materials: VMM User Roles Summary

Source: Microsoft

ROLE BASED SECURITY

- Administrator: Members of this role can perform all administrative actions on all objects that VMM manages. Only administrators can add a WSUS server to VMM to enable updates of the VMM fabric through VMM.
- Virtual Machine Administrator (applicable for VMM 2019 and later): Administrators can create the role. Delegated Administrators can create a VM administrator role that includes their entire scope or a subset of their scope, library servers, and Run As accounts.
- Fabric Administrator (Delegated Administrator): Members of this role can perform all administrative tasks within their assigned host groups, clouds, and library servers. Delegated Administrators cannot modify VMM settings, add or remove members of the Administrators user role, or add WSUS servers.
- Read-Only Administrator: Members of this role can view properties, status, and job status of objects within their assigned host groups, clouds, and library servers, but they cannot modify the objects. The read-only administrator can also view Run As accounts that administrators or delegated administrators have specified for that read-only administrator user role.
- Tenant Administrator: Members of this role can manage self-service users and VM networks. Tenant administrators can create, deploy, and manage their own virtual machines and services by using the VMM console or a web portal. They can also specify which tasks the self-service users can perform on their virtual machines and services, and can place quotas on computing resources and virtual machines.
- Application Administrator (Self-Service User): Members of this role can create, deploy, and manage their own virtual machines and services. They can manage VMM using the VMM console.

How To Create a Virtual Machine Administrator Role in SCVMM

Update: A new write-up has been posted with screenshots here.

Virtual Machine Manager (VMM) 2019 includes a new role, ‘VM administrator.’ This RBAC provides just enough permissions for read-only visibility into the fabric of the data center, but prevents escalation of privilege to fabric administration. (Source: Microsoft)

To create an RBAC role for VM administrator, go to Settings > User Roles > Create User Role > type in the name as ‘Virtual Machine Administrator’ > Next > select ‘Virtual Machine Administrator’ > Next > click Add > select Active Directory Users or Groups > OK > Next > narrow down the scope (e.g. ‘All Hosts’) > Next > put a check mark to each desired permissions (as listed below) > Next > Add Library Servers > Next > Add ‘Run As Accounts’ > Next > Finish

Virtual Machine Administrator Permissions:
- Migrate virtual machine (recommended)
- Migrate VM Storage (recommended)
- Pause and resume (recommended)
- Receive
- Remote connection
- Remove
- Save
- Share
- Shutdown (recommended)
- Start (recommended)
- Stop (recommended)
- Store and re-deploy
- Update VM functional level (recommended)

An Experience in Upgrading Synology SSD Cache Drives

This is a quick note to myself so that I won’t repeat the same mistake (made due to lack of knowledge) when upgrading a Synology SSD cache on a particular volume.

Apparently, the proper procedure to change SSD caching drives is:

  1. Remove the existing SSD caching: 
    Open Storage Manager > Storage > expand the desired volume (e.g. volume 1) > click on the three dots ‘…’ associated with the SSD Cache volume > select Remove > wait until the removal process completes
    Synology cache drive removal
  2. Turn off Synology
  3. Physically replace the old SSD drives with new ones
  4. Turn Synology back on
  5. Assign the new caching SSDs toward the desired volumes with one of these options:
    – Read-only: optimized for websites
    – Read-write: optimized for databases and virtual machines

Here are some sample error messages from when the above procedure was executed out of order:

The read-write cache of Volume 1 on SYNOLOGY02 is missing. Please power off your Synology NAS first, and make sure that the SSD used by the SSD Cache is plugged in and then reboot your device.

From SYNOLOGY02
Volume 1 on SYNOLOGY02 has crashed. It is possible that more files may be corrupted if this volume is still used. Please go to Storage Manager > Storage for more information.

From SYNOLOGY02

 

Server Decommissioning Procedure

  1. Migrate applications or services from the old server to the new server
  2. Obtain approval from the application owner or business unit to decommission the machine
  3. Shut down the old server and monitor the process for 30 days
  4. Validate that the original application or service is running properly on the new server without any dependency on the old server
  5. Delete the old server’s hostname(s) and DNS record(s) from Active Directory
  6. Delete the IP reservation for the old server from DHCP reservations, if necessary
  7. If the server or application class falls within the retention policy definition, make backups of data and/or machine files as necessary
  8. Remove the physical hardware, or delete the virtual machine (VM)
  9. Validate that the machine instance has been purged from DNS, DHCP, and AD
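Steps 5, 6, and 9 can be scripted. The sketch below assumes the RSAT ActiveDirectory, DnsServer, and DhcpServer modules are available and is run where those cmdlets can reach the relevant servers; the server name and zone are placeholders.

```powershell
# Sketch for steps 5, 6, and 9: purge an old server from AD, DNS, and DHCP
$oldServer = 'OLDSERVER01'        # placeholder hostname
$zone      = 'kimconnect.com'     # placeholder DNS zone

# Step 5: remove the computer account and its A record
Remove-ADComputer -Identity $oldServer -Confirm:$false
Remove-DnsServerResourceRecord -ZoneName $zone -RRType A -Name $oldServer -Force

# Step 6: remove the DHCP reservation, if one exists
Get-DhcpServerv4Scope | Get-DhcpServerv4Reservation |
  Where-Object Name -like "$oldServer*" | Remove-DhcpServerv4Reservation

# Step 9: validate the purge (both commands should return nothing)
Get-ADComputer -Filter "Name -eq '$oldServer'"
Resolve-DnsName "$oldServer.$zone" -ErrorAction SilentlyContinue
```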

Considerations in Granting Access to Helpdesk Users via Group ‘Account Operators’

One consideration is to add Helpdesk users to the ‘Account Operators’ group. This would effectively grant limited account-creation privileges to those personnel. Members of this group can administer many types of accounts, including users and both local and global groups. Account Operators can also log on to domain controllers. Overall, this is a rather high level of access.

Account Operators “can create and manage users and groups in the domain, but it cannot manage service administrator accounts. As a best practice, do not add members to this group, and do not use it for any delegated administration.” (source: https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc875827(v=technet.10)?redirectedfrom=MSDN#XSLTsection124121120120).

Therefore, administrators are advised to create a custom AD group for this purpose. I’ve written an article on this topic here (https://kimconnect.com/active-directory-helpdesk-admins-group-creation/)
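As an illustration of the custom-group approach, here is a minimal sketch; the group name and OU are examples, and `dsacls` (shipped with the AD DS tools) is used to delegate only password resets on user objects in one OU rather than the broad rights Account Operators carries.

```powershell
# Sketch: a least-privilege alternative to 'Account Operators'
$groupName = 'Helpdesk Admins'                     # example group name
$ou        = 'OU=Staff,DC=kimconnect,DC=com'       # example OU to delegate over
New-ADGroup -Name $groupName -GroupCategory Security -GroupScope Global `
  -Path 'CN=Users,DC=kimconnect,DC=com'

# Grant the control access right 'Reset Password' on user objects,
# inherited to sub-objects of the OU (/I:S)
dsacls $ou /I:S /G "KIMCONNECT\${groupName}:CA;Reset Password;user"
```

Additional rights (e.g. unlock account, write to specific attributes) can be delegated the same way, one explicit grant at a time.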

Office 365 Email Security for SMTP Relays

Error message:

Unable to read data from the transport connection: net_io_connectionclosed.

Troubleshooting steps:

  1. Ensure that TLS 1.2 is being used, as Windows 2016 & older may default to TLS 1.1

    [Net.ServicePointManager]::SecurityProtocol=[Net.SecurityProtocolType]::Tls12
  2. This new error message resulted
    The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.57
    Client not authenticated to send mail. Error: 535 5.7.139 Authentication unsuccessful, SmtpClientAuthentication is
    disabled for the Tenant. Visit https://aka.ms/smtp_auth_disabled for more information.
    [SJ0PR03CA0167.namprd03.prod.outlook.com]

3. Use a browser to log in and check the email account

Notice that 2nd-factor advisories are being displayed…

Attempted to navigate to https://admin.microsoft.com/Adminportal/Home to realize that the provided account is not a member of the Global Administrators…

4. Advise client to have the organization administrator perform this task:

  1. Open the Microsoft 365 admin center (https://admin.microsoft.com/) and go to Users > Active users > Select the user, and in the flyout that appears, click Mail > In the Email apps section, click Manage email apps > Verify the Authenticated SMTP setting: unchecked = disabled, checked = enabled (preferred)

  2. Sign in to the Microsoft 365 admin center (https://admin.microsoft.com/adminportal) using security administrator, Conditional Access administrator, or Global admin credentials > In the left pane, select Show All > under Admin centers, select Azure Active Directory > In the left pane of the Azure Active Directory admin center, select Azure Active Directory > From the left menu of the Dashboard, in the Manage section, select Properties > Manage Security defaults > set Enable Security defaults = No > Save changes

    This is to bypass this prompt: “Microsoft has enabled Security Defaults to keep your account secure. Learn more about the benefits of Security Defaults. Skip for now (14 days until this is required). Use a different account”

    Before the SMTP option was changed from disabled to enabled:

    The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.57
    Client not authenticated to send mail. Error: 535 5.7.139 Authentication unsuccessful, SmtpClientAuthentication is
    disabled for the Tenant. Visit https://aka.ms/smtp_auth_disabled for more information.

    After:

    The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.57
    Client not authenticated to send mail. Error: 535 5.7.139 Authentication unsuccessful, the request did not meet the
    criteria to be authenticated successfully. Contact your administrator. [BYAPR05CA0042.namprd05.prod.outlook.com]
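Once Authenticated SMTP is enabled, the relay can be re-tested from PowerShell. A minimal sketch, with placeholder addresses and the sending mailbox’s credentials assumed:

```powershell
# Sketch: verify authenticated SMTP relay to Office 365 over TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$cred = Get-Credential   # the mailbox account with Authenticated SMTP enabled
Send-MailMessage -From 'relay@kimconnect.com' -To 'admin@kimconnect.com' `
  -Subject 'SMTP relay test' -Body 'Test after enabling Authenticated SMTP' `
  -SmtpServer 'smtp.office365.com' -Port 587 -UseSsl -Credential $cred
```

If the 535 5.7.139 error persists after this change, Conditional Access or per-user policies may still be blocking legacy authentication for that account.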

An Exercise in Discovering Whether an Active Directory Account Has RDP Access to Windows Bastion Hosts

Check Computers:

$computernames='RDPSERVER01','RDPSERVER02','RDPSERVER03'
invoke-command -computername $computernames {get-localgroupmember 'remote desktop users'}|select PSComputername,Name
# Sample output
PS C:\Windows\system32> invoke-command -computername @('RDPSERVER01','RDPSERVER02','RDPSERVER03') {get-localgroupmember 'remote desktop users'}|select PSComputername,Name

PSComputerName Name
-------------- ----
RDPSERVER01   KIMCONNECT\Domain Admins
RDPSERVER01   KIMCONNECT\Bastion RDP
RDPSERVER02   KIMCONNECT\Domain Admins
RDPSERVER02   KIMCONNECT\Bastion RDP
RDPSERVER03   KIMCONNECT\Domain Admins
RDPSERVER03   KIMCONNECT\Bastion RDP

Check User Account:

$username='kimconnect'
Get-ADUser $username -Properties *|select SamAccountName,Name,BadLogonCount,LastLogonDate,LockedOut,MemberOf,Modified,PasswordExpired,PasswordLastSet
# Sample output
PS C:\Windows\system32> Get-ADUser $username -Properties *|select SamAccountName,Name,BadLogonCount,LastLogonDate,LockedOut,MemberOf,Modified,PasswordExpired,PasswordLastSet

SamAccountName  : kimconnect
Name            : Kim Connect
BadLogonCount   : 2
LastLogonDate   : 10/13/2010 1:41:45 AM
LockedOut       : False
MemberOf        : {CN=Bastion RDP,DC=kimconnect,DC=com}
Modified        : 10/13/2010 1:41:53 AM
PasswordExpired : False
PasswordLastSet : 10/13/2010 1:41:45 AM
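The two checks above can be combined into one pass that reports which group, if any, grants the account RDP access on each host. This sketch only resolves direct group memberships (nested groups would need `Get-ADGroupMember -Recursive` or a token-groups query), and the host and account names follow the examples above:

```powershell
# Sketch: does $username gain RDP via any group in 'Remote Desktop Users' on the bastions?
$username      = 'kimconnect'
$computernames = 'RDPSERVER01','RDPSERVER02','RDPSERVER03'

# Direct group memberships of the account
$userGroups = (Get-ADPrincipalGroupMembership $username).Name

# Compare against each host's local 'Remote Desktop Users' membership
Invoke-Command -ComputerName $computernames {
  Get-LocalGroupMember 'Remote Desktop Users'
} | ForEach-Object {
    $granting = $_.Name -replace '^.+\\',''   # strip the DOMAIN\ prefix
    if ($userGroups -contains $granting) {
      "{0}: access via group '{1}'" -f $_.PSComputerName, $granting
    }
  }
```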

Indications that Chocolatey is locked down

# System's version is less than vendor's current (Chocolatey v0.10.15)
[LAX-WEB005]: PS C:\Users\testadmin\Documents> choco source
Please run chocolatey /? or chocolatey help - chocolatey v0.9.8.28

# Server unable to reach chocolatey.org
[LAX-WEB005]: PS C:\Users\testadmin\Documents> wget https://chocolatey.org/api/v2/
The request was aborted: Could not create SSL/TLS secure channel.
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest],WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

What to do?

When a system has outbound firewall access control lists (ACLs) blocking HTTP outbound as shown above, one may assume that it’s as intended. Proper protocol when attempting to make changes to these sorts of systems is to obtain authorization from the system’s owner. Once approved, one may proceed to apply the knowledge from this documentation (https://docs.chocolatey.org/en-us/features/host-packages) to administer the system.

Note: when I have time, I may provide practical examples of how to deploy & administer private Chocolatey repos…
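In the meantime, once an internal repository exists (the URL below is a placeholder), repointing a locked-down host is a few `choco source` commands:

```powershell
# Sketch: point a locked-down host at an internal repository
choco source add --name=internal --source="https://repo.kimconnect.com/chocolatey" --priority=1
choco source disable --name=chocolatey      # stop reaching out to the public feed
choco upgrade chocolatey --source=internal  # bring the old v0.9.8.28 client up to date
```

This keeps package traffic inside the firewall, which is consistent with the outbound ACLs observed above.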

How To Upgrade NextCloud 22.1.1 to 22.2.0 When Deployed with Kubernetes & Helm

Step 1:

Navigate to nextcloud > html > edit version.php

<?php 
$OC_Version = array(22,1,1,2);
$OC_VersionString = '22.1.1';
$OC_Edition = '';
$OC_Channel = 'stable';
$OC_VersionCanBeUpgradedFrom = array (
  'nextcloud' => 
  array (
    '21.0' => true,
    '22.0' => true,
    '22.1' => true,
    '22.2' => true,   # Add this line 
  ),
  'owncloud' => 
  array (
    '10.5' => true,
  ),
);
$OC_Build = '2021-08-26T13:27:46+00:00 1eea64f2c3eb0e110391c24830cea5f8d9c3e6a1';
$vendor = 'nextcloud';
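Before editing (and again after the upgrade), the running version can be confirmed inside the pod with `occ`. The pod and container names below follow the examples in this post; adjust them to your deployment:

```shell
# Sketch: confirm Nextcloud's installed version inside the running pod
kubectl exec -it nextcloud-79b5b775fd-2s4bj -c nextcloud -- \
  su -s /bin/sh www-data -c 'php occ status'
# occ status reports installed state, version string, and maintenance mode
```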

Step 2: Run the ‘helm upgrade…’ command with the desired NextCloud version

# Example:
helm upgrade nextcloud nextcloud/nextcloud \
  --set image.tag=22.2.0-fpm \
  --set nginx.enabled=true \
  --set nextcloud.host=dragoncoin.com \
  --set nextcloud.username=dragon,nextcloud.password=SOMEVERYCOMPLEXANDVERYVERYLONGPASSWORD \
  --set internalDatabase.enabled=false \
  --set externalDatabase.existingSecret.enabled=true \
  --set externalDatabase.type=postgresql \
  --set externalDatabase.host='nextcloud-db-postgresql.default.svc.cluster.local' \
  --set persistence.enabled=true \
  --set persistence.existingClaim=nextcloud-claim \
  --set persistence.size=100Ti \
  --set livenessProbe.enabled=false \
  --set readinessProbe.enabled=false \
  --set nextcloud.phpConfigs.upload_max_size=40G \
  --set nextcloud.phpConfigs.upload_max_filesize=40G \
  --set nextcloud.phpConfigs.post_max_size=40G \
  --set nextcloud.phpConfigs.memory_limit=80G

Step 3: Check the logs and wait for the upgrading process to complete

Previous pods terminated to make way for new pods

admin@controller:~$ k get pod
NAME                                              READY   STATUS        RESTARTS   AGE
nextcloud-67855fc94c-lc2xr                        0/2     Terminating   0          74m
nextcloud-db-postgresql-0                         1/1     Running       0          91m
admin@controller:~$ k get pod
NAME                                              READY   STATUS    RESTARTS   AGE
nextcloud-79b5b775fd-2s4bj                        2/2     Running   0          56s
nextcloud-db-postgresql-0                         1/1     Running   0          92m

Expected 502 errors during pod upgrades

admin@controller:~$ k logs nextcloud-79b5b775fd-2s4bj nextcloud-nginx
2021/11/01 05:36:49 [error] 32#32: *24 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.0.95, server: , request: "GET /status.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com"
10.10.0.95 - dragon [01/Nov/2021:05:36:49 +0000] "GET /status.php HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Linux) mirall/3.2.2git (build 5903) (Nextcloud, linuxmint-5.4.0-89-generic ClientArchitecture: x86_64 OsArchitecture: x86_64)" "192.168.0.164"

Logs showing that the upgrading process has progressed… and eventually completed

admin@controller:~$ kubectl logs nextcloud-79b5b775fd-2s4bj nextcloud

Initializing nextcloud 22.2.0.2 ...
Upgrading nextcloud from 22.1.1.2 ...
Initializing finished
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Setting log level to debug
Turned on maintenance mode
Updating database schema
Updated database
Updating <lookup_server_connector> ...
Updated <lookup_server_connector> to 1.10.0
Updating <oauth2> ...
Updated <oauth2> to 1.10.0
Updating <files> ...
Updated <files> to 1.17.0
Updating <cloud_federation_api> ...
Updated <cloud_federation_api> to 1.5.0
Updating <dav> ...
Fix broken values of calendar objects

 Starting ...

Updated <dav> to 1.19.0
Updating <files_sharing> ...
Updated <files_sharing> to 1.14.0
Updating <files_trashbin> ...
Updated <files_trashbin> to 1.12.0
Updating <files_versions> ...
Updated <files_versions> to 1.15.0
Updating <sharebymail> ...
Updated <sharebymail> to 1.12.0
Updating <workflowengine> ...
Updated <workflowengine> to 2.4.0
Updating <systemtags> ...
Updated <systemtags> to 1.12.0
Updating <theming> ...
Updated <theming> to 1.13.0
Updating <accessibility> ...
Migrate old user config

    0/0 [>---------------------------]   0% Starting ...
    0/0 [->--------------------------]   0%
 Starting ...

Updated <accessibility> to 1.8.0
Updating <contactsinteraction> ...
Updated <contactsinteraction> to 1.3.0
Updating <federatedfilesharing> ...
Updated <federatedfilesharing> to 1.12.0
Updating <provisioning_api> ...
Updated <provisioning_api> to 1.12.0
Updating <settings> ...
Updated <settings> to 1.4.0
Updating <twofactor_backupcodes> ...
Updated <twofactor_backupcodes> to 1.11.0
Updating <updatenotification> ...
Updated <updatenotification> to 1.12.0
Updating <user_status> ...
Updated <user_status> to 1.2.0
Updating <weather_status> ...
Updated <weather_status> to 1.2.0
Checking for update of app accessibility in appstore
Checked for update of app "accessibility" in App Store
Checking for update of app activity in appstore
Checked for update of app "activity" in App Store
Checking for update of app audioplayer in appstore
Checked for update of app "audioplayer" in App Store
Checking for update of app breezedark in appstore
Checked for update of app "breezedark" in App Store
Checking for update of app bruteforcesettings in appstore
Checked for update of app "bruteforcesettings" in App Store
Checking for update of app camerarawpreviews in appstore
Checked for update of app "camerarawpreviews" in App Store
Checking for update of app cloud_federation_api in appstore
Checked for update of app "cloud_federation_api" in App Store
Checking for update of app cms_pico in appstore
Checked for update of app "cms_pico" in App Store
Checking for update of app contactsinteraction in appstore
Checked for update of app "contactsinteraction" in App Store
Checking for update of app dav in appstore
Checked for update of app "dav" in App Store
Checking for update of app documentserver_community in appstore
Checked for update of app "documentserver_community" in App Store
Checking for update of app drawio in appstore
Checked for update of app "drawio" in App Store
Checking for update of app external in appstore
Checked for update of app "external" in App Store
Checking for update of app federatedfilesharing in appstore
Checked for update of app "federatedfilesharing" in App Store
Checking for update of app files in appstore
Checked for update of app "files" in App Store
Checking for update of app files_antivirus in appstore
Checked for update of app "files_antivirus" in App Store
Checking for update of app files_markdown in appstore
Checked for update of app "files_markdown" in App Store
Checking for update of app files_mindmap in appstore
Checked for update of app "files_mindmap" in App Store
Checking for update of app files_pdfviewer in appstore
Checked for update of app "files_pdfviewer" in App Store
Checking for update of app files_rightclick in appstore
Checked for update of app "files_rightclick" in App Store
Checking for update of app files_sharing in appstore
Checked for update of app "files_sharing" in App Store
Checking for update of app files_trashbin in appstore
Checked for update of app "files_trashbin" in App Store
Checking for update of app files_versions in appstore
Checked for update of app "files_versions" in App Store
Checking for update of app files_videoplayer in appstore
Checked for update of app "files_videoplayer" in App Store
Checking for update of app forms in appstore
Checked for update of app "forms" in App Store
Checking for update of app logreader in appstore
Checked for update of app "logreader" in App Store
Checking for update of app lookup_server_connector in appstore
Checked for update of app "lookup_server_connector" in App Store
Checking for update of app maps in appstore
Checked for update of app "maps" in App Store
Checking for update of app music in appstore
Checked for update of app "music" in App Store
Checking for update of app news in appstore
Checked for update of app "news" in App Store
Checking for update of app notifications in appstore
Checked for update of app "notifications" in App Store
Checking for update of app oauth2 in appstore
Checked for update of app "oauth2" in App Store
Checking for update of app password_policy in appstore
Checked for update of app "password_policy" in App Store
Checking for update of app photos in appstore
Checked for update of app "photos" in App Store
Checking for update of app privacy in appstore
Checked for update of app "privacy" in App Store
Checking for update of app provisioning_api in appstore
Checked for update of app "provisioning_api" in App Store
Checking for update of app quicknotes in appstore
Checked for update of app "quicknotes" in App Store
Checking for update of app recommendations in appstore
Checked for update of app "recommendations" in App Store
Checking for update of app registration in appstore
Checked for update of app "registration" in App Store
Checking for update of app richdocuments in appstore
Checked for update of app "richdocuments" in App Store
Checking for update of app serverinfo in appstore
Checked for update of app "serverinfo" in App Store
Checking for update of app settings in appstore
Checked for update of app "settings" in App Store
Checking for update of app sharebymail in appstore
Checked for update of app "sharebymail" in App Store
Checking for update of app spreed in appstore
Checked for update of app "spreed" in App Store
Checking for update of app support in appstore
Checked for update of app "support" in App Store
Checking for update of app survey_client in appstore
Checked for update of app "survey_client" in App Store
Checking for update of app systemtags in appstore
Checked for update of app "systemtags" in App Store
Checking for update of app tasks in appstore
Checked for update of app "tasks" in App Store
Checking for update of app text in appstore
Checked for update of app "text" in App Store
Checking for update of app theming in appstore
Checked for update of app "theming" in App Store
Checking for update of app twofactor_backupcodes in appstore
Checked for update of app "twofactor_backupcodes" in App Store
Checking for update of app updatenotification in appstore
Checked for update of app "updatenotification" in App Store
Checking for update of app user_status in appstore
Checked for update of app "user_status" in App Store
Checking for update of app video_converter in appstore
Checked for update of app "video_converter" in App Store
Checking for update of app viewer in appstore
Checked for update of app "viewer" in App Store
Checking for update of app weather_status in appstore
Checked for update of app "weather_status" in App Store
Checking for update of app workflowengine in appstore
Checked for update of app "workflowengine" in App Store
Starting code integrity check...

After about five minutes (depending on the system hardware), NextCloud should come back online. At that point, the upgrade is complete.

Kubernetes Ingress Error 502 Upon NextCloud Upgrades

Issue:
Just the other day, I attempted to run a ‘helm upgrade…’ command on my NextCloud application. I took care to ensure that the container’s version matched the persistent storage’s marker (e.g. image.tag=22.1-fpm), as a mismatch there would prevent NextCloud from starting. However, another issue puzzled me: a 502 error upon navigating to the application’s URL.

Resolution:
– Check the logs
– Review Kubernetes Ingress documentation
– Realize that this specific issue requires no fixing

Checking the logs:

admin@controller:~$ k logs nextcloud-67855fc94c-lc2xr nextcloud-nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/01 04:18:37 [error] 34#34: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.90.192, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com"
... Truncated for brevity ...
2021/11/01 04:34:20 [error] 34#34: *155 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.100.95, server: , request: "GET /apps/photos/service-worker.js HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dragoncoin.com", referrer: "https://dragoncoin.com/apps/photos/service-worker.js"
172.16.100.95 - - [01/Nov/2021:04:34:20 +0000] "GET /apps/photos/service-worker.js HTTP/1.1" 502 559 "https://dragoncoin.com/apps/photos/service-worker.js" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" "172.16.100.164"
admin@controller:~$ k logs nextcloud-67855fc94c-lc2xr nextcloud
Initializing nextcloud 22.1.1.2 ...
Upgrading nextcloud from 22.1.0.1 ...
Initializing finished
Nextcloud or one of the apps require upgrade - only a limited number of commands are available
You may use your browser or the occ upgrade command to do the upgrade
Setting log level to debug
Turned on maintenance mode
Updating database schema
Updated database
Updating <workflowengine> ...
Updated <workflowengine> to 2.3.1
Checking for update of app accessibility in appstore
Checked for update of app "accessibility" in App Store
Checking for update of app activity in appstore
Checked for update of app "activity" in App Store
Checking for update of app audioplayer in appstore
Update app audioplayer from App Store
Checked for update of app "audioplayer" in App Store
Checking for update of app breezedark in appstore
Update app breezedark from App Store
Checked for update of app "breezedark" in App Store
Checking for update of app bruteforcesettings in appstore
Checked for update of app "bruteforcesettings" in App Store
Checking for update of app camerarawpreviews in appstore
Checked for update of app "camerarawpreviews" in App Store
Checking for update of app cloud_federation_api in appstore
Checked for update of app "cloud_federation_api" in App Store
Checking for update of app cms_pico in appstore
Update app cms_pico from App Store
Repair warning: Replacing Pico CMS config file "config.yml.template"
Repair warning: Replacing Pico CMS system template "empty"
Repair warning: Replacing Pico CMS system template "sample_pico"
Repair warning: Replacing Pico CMS system theme "default"
Repair warning: Replacing Pico CMS system plugin "PicoDeprecated"
Checked for update of app "cms_pico" in App Store
Checking for update of app contactsinteraction in appstore
Checked for update of app "contactsinteraction" in App Store
Checking for update of app dav in appstore
Checked for update of app "dav" in App Store
Checking for update of app documentserver_community in appstore
Checked for update of app "documentserver_community" in App Store
Checking for update of app drawio in appstore
Checked for update of app "drawio" in App Store
Checking for update of app external in appstore
Checked for update of app "external" in App Store
Checking for update of app federatedfilesharing in appstore
Checked for update of app "federatedfilesharing" in App Store
Checking for update of app files in appstore
Checked for update of app "files" in App Store
Checking for update of app files_antivirus in appstore
Update app files_antivirus from App Store
Checked for update of app "files_antivirus" in App Store
Checking for update of app files_markdown in appstore
Checked for update of app "files_markdown" in App Store
Checking for update of app files_mindmap in appstore
Checked for update of app "files_mindmap" in App Store
Checking for update of app files_pdfviewer in appstore
Checked for update of app "files_pdfviewer" in App Store
Checking for update of app files_rightclick in appstore
Checked for update of app "files_rightclick" in App Store
Checking for update of app files_sharing in appstore
Checked for update of app "files_sharing" in App Store
Checking for update of app files_trashbin in appstore
Checked for update of app "files_trashbin" in App Store
Checking for update of app files_versions in appstore
Checked for update of app "files_versions" in App Store
Checking for update of app files_videoplayer in appstore
Checked for update of app "files_videoplayer" in App Store
Checking for update of app forms in appstore
Checked for update of app "forms" in App Store
Checking for update of app logreader in appstore
Checked for update of app "logreader" in App Store
Checking for update of app lookup_server_connector in appstore
Checked for update of app "lookup_server_connector" in App Store
Checking for update of app maps in appstore
Checked for update of app "maps" in App Store
Checking for update of app music in appstore
Update app music from App Store
Checked for update of app "music" in App Store
Checking for update of app news in appstore
Update app news from App Store
Checked for update of app "news" in App Store
Checking for update of app notifications in appstore
Checked for update of app "notifications" in App Store
Checking for update of app oauth2 in appstore
Checked for update of app "oauth2" in App Store
Checking for update of app password_policy in appstore
Checked for update of app "password_policy" in App Store
Checking for update of app photos in appstore
Checked for update of app "photos" in App Store
Checking for update of app privacy in appstore
Checked for update of app "privacy" in App Store
Checking for update of app provisioning_api in appstore
Checked for update of app "provisioning_api" in App Store
Checking for update of app quicknotes in appstore
Checked for update of app "quicknotes" in App Store
Checking for update of app recommendations in appstore
Checked for update of app "recommendations" in App Store
Checking for update of app registration in appstore
Checked for update of app "registration" in App Store
Checking for update of app richdocuments in appstore
Update app richdocuments from App Store
Checked for update of app "richdocuments" in App Store
Checking for update of app serverinfo in appstore
Checked for update of app "serverinfo" in App Store
Checking for update of app settings in appstore
Checked for update of app "settings" in App Store
Checking for update of app sharebymail in appstore
Checked for update of app "sharebymail" in App Store
Checking for update of app spreed in appstore
Update app spreed from App Store
Checked for update of app "spreed" in App Store
Checking for update of app support in appstore
Checked for update of app "support" in App Store
Checking for update of app survey_client in appstore
Checked for update of app "survey_client" in App Store
Checking for update of app systemtags in appstore
Checked for update of app "systemtags" in App Store
Checking for update of app tasks in appstore
Checked for update of app "tasks" in App Store
Checking for update of app text in appstore
Checked for update of app "text" in App Store
Checking for update of app theming in appstore
Checked for update of app "theming" in App Store
Checking for update of app twofactor_backupcodes in appstore
Checked for update of app "twofactor_backupcodes" in App Store
Checking for update of app updatenotification in appstore
Checked for update of app "updatenotification" in App Store
Checking for update of app user_status in appstore
Checked for update of app "user_status" in App Store
Checking for update of app video_converter in appstore
Update app video_converter from App Store
Checked for update of app "video_converter" in App Store
Checking for update of app viewer in appstore
Checked for update of app "viewer" in App Store
Checking for update of app weather_status in appstore
Checked for update of app "weather_status" in App Store
Checking for update of app workflowengine in appstore
Checked for update of app "workflowengine" in App Store
Starting code integrity check...

Reviewing Documentation:
According to the Kubernetes ingress requirements (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites), the application must return a 200 status code at ‘/’. It is known behavior that an application not yet in a ‘ready’ state returns a 302 (redirect to login). If health checks are configured, the failing results cause the ingress resource to return a 502. Even if health checks are skipped, a container that is still in the ‘Starting code integrity check…’ phase relays non-200 statuses, which likewise leads the Ingress to return 502 to users.
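The 200/302/502 behavior described above can be summarized in a tiny triage helper. This is a sketch for interpreting what the ingress reports, not part of NextCloud or the ingress controller:

```shell
# Map an HTTP status observed at the ingress to the likely NextCloud state
# (mirrors the 200/302/502 behavior described above).
explain_status() {
  case "$1" in
    200) echo "ready" ;;
    302) echo "not ready - redirecting to login while starting up" ;;
    502) echo "upstream (php-fpm) not accepting connections yet" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}
explain_status 502   # the state observed during the code integrity check
```

In other words, a 502 during an upgrade is expected and clears on its own once the integrity check finishes.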

How to Cram For a Test

From my experience, the trick to memorizing 10,000+ questions and answers at a 90% accuracy level was to read and practice hands-on on every question on the first pass (about 10 hours per day x 6 days x 19 weeks). On the second pass, I marked any questions that I recalled inaccurately. On the third pass, I read only the marked questions and iterated until the final batch could be recalled at 100% accuracy. On test day, I would still miss some items and be surprised by a few more; I might have to retake the test more than once to pass with scores of 85%+.
 
The trick is to train my brain to pay attention only to mistakes, not to everything. That is easy to do because we humans are natural at learning from mistakes. This is how proper planning can beat a genius.

Domain Name Records Overview: A-record, MX, DKIM, SPF, SRV

A RECORD (A-host):

– What: an address record (A-record) specifies the IPv4 address(es) of a given domain. The IPv6 equivalent is called an AAAA record.
– Why: name-to-address translation lets users type a name and reach the IP address of the web server
– Who: the domain admin sets these up, and they affect all users of the domain
– How:
kimconnect.com record type: value: TTL
@ A x.x.x.x 14400
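To check an A record from a shell, `dig` is the usual tool. Since a live query needs network access, the parsing sketch below uses a sample record with a documentation IP (203.0.113.10):

```shell
# Live query (requires network):
#   dig +short kimconnect.com A
# Parsing the zone-style record format shown above: name, type, value, TTL
record='@ A 203.0.113.10 14400'
ip=$(echo "$record" | awk '{print $3}')   # third field holds the address
ttl=$(echo "$record" | awk '{print $4}')  # fourth field holds the TTL in seconds
echo "address=$ip ttl=$ttl"
```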

MX (Mail Exchange):

– What: mail exchange (MX) records direct emails toward designated mail servers. Like other name-based records, an MX record points to a hostname rather than an IP; the distinction is that it is designated for mail routing and carries a priority value
– Why: these entries control how email messages should be routed in accordance with the Simple Mail Transfer Protocol (SMTP)
– Who: domain admins can edit these records
– How: below is an example of setting mail records of a domain toward 2 mail servers with different priorities
kimconnect.com record type: priority: value: TTL
@ MX 10 mail1.kimconnect.com 45000
@ MX 20 mail2.kimconnect.com 45000
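To see which of the two servers above receives mail first (the lower priority number wins), the records can be sorted numerically. A sketch using the sample data from the table:

```shell
# Pick the preferred mail server: the lowest MX priority (field 3) wins
preferred=$(printf '%s\n' \
  '@ MX 10 mail1.kimconnect.com 45000' \
  '@ MX 20 mail2.kimconnect.com 45000' \
  | sort -n -k3,3 | head -n 1 | awk '{print $4}')
echo "$preferred"
```

Senders fall back to mail2.kimconnect.com only when the priority-10 server is unreachable.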

SPF (Sender Policy Framework):

– What: Sender Policy Framework (spf) is a type of TXT record in your DNS zone
– Why: SPF records identify which mail servers are permitted to send email on behalf of your domain. These records prevent spammers from sending emails with forged ‘From’ addresses at your domain
– Who: domain admins can make these changes. Users benefit from not receiving forged emails, and would correctly receive emails being sent from company servers.
– How (examples):
a. Simple:
- v=spf1 include:_spf.google.com ~all (Google)
- v=spf1 include:spf.protection.outlook.com ~all (Microsoft)
b. Complex:
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK include:_spf.google.com ~all (Google)
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK include:spf.protection.outlook.com ~all (Microsoft)
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK include:spf.protection.outlook.com include:_spf.google.com include:aem.autotask.net include:customers.clickdimensions.com ~all (Google, Microsoft, ClickDimensions, Autotask)

Explanations

  • v=spf1 : marks the SPF protocol version (version 1 is the most commonly used by email servers as of this writing)
  • ip4 or ip6 : specifies the IP address version; a single IP or a summarized subnet/supernet is acceptable
  • mx : allows the domain’s MX servers to send mail
  • include : allows a third party to send emails on your domain’s behalf
  • a : allows the IP(s) in the domain’s A record to send mail
  • +all : allows any IP to send emails on this domain’s behalf
  • -all : hard fail; no other IPs may send emails on the domain’s behalf
  • ~all : soft fail; emails from unlisted IPs are still accepted, but marked as suspect

DKIM (DomainKeys Identified Mail):

– What: a DNS record associated with a domain, composed of a selector and a public key. A corresponding private key is installed on the email server and is used to sign outgoing messages; the signature is attached to the email headers. Only the public key is published as the domain’s DNS record. The receiving email server verifies the signature against the published key to determine whether the email is legitimate (not spoofed)
– Why: to prevent email spoofing
– Who: domain admins make these changes
– How: (source: Google)

  1.  Generate the domain key for your domain (For Google: https://support.google.com/a/answer/174126?hl=en&ref_topic=2752442)
  2.  Add the public key to your domain’s DNS records
    • Example: kimconnect.com. 300 IN TXT "v=DKIM1; k=rsa; p=SOMEHASH" "MOREHASH"
  3.  Add DKIM onto email server(s) to start adding a DKIM signature to all outgoing messages
    • Example: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
      d=kimconnect.com; s=google;
      h=sender:mime-version:from:to:date:subject:message-id
      :x-original-sender:x-original-authentication-results:precedence
      :mailing-list:list-id:list-post:list-help:list-archive
      :list-unsubscribe;
      bh=SOMELONGHASH
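For intuition on the bh= tag in the signature above: with a=rsa-sha256 it is the base64-encoded SHA-256 digest of the canonicalized message body. A minimal sketch with openssl, skipping the canonicalization step that real signers perform per the c= setting:

```shell
# Compute a DKIM-style body hash (bh=) for a sample body.
# Real signers first canonicalize the body per the c= setting.
bh=$(printf 'Hello world\r\n' | openssl dgst -sha256 -binary | base64)
echo "bh=$bh"
```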

SRV (Service Records):

– What: service (SRV) records specify hosts and ports for services such as VoIP, instant messaging, domain proof of ownership, etc.
– Why: these records provide host and port information that other DNS record types cannot express. Some Internet protocols require the use of SRV records in order to function.
– Who: domain admins manage these at DNS zone control panels
– How: SRV records must point to an A record (in IPv4) or an AAAA record (in IPv6), not CNAME. Below are some examples
_sip._tls.@ 100 1 443 sipdir.online.lync.com. (Microsoft Lync)
_sipfederationtls._tcp.@ 100 1 5061 sipfed.online.lync.com. (Microsoft Lync)
_xmpp._tcp.kimconnect.com. 86400 IN SRV 10 5 5223 xmpp.kimconnect.com. (xmpp server)
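The fields of an SRV record, in order, are priority, weight, port, and target. Splitting the xmpp example above makes that concrete:

```shell
# Split the SRV record into its fields: name, TTL, class, type,
# then priority, weight, port, target
srv='_xmpp._tcp.kimconnect.com. 86400 IN SRV 10 5 5223 xmpp.kimconnect.com.'
set -- $srv
priority=$5; weight=$6; port=$7; target=$8
echo "priority=$priority weight=$weight port=$port target=$target"
```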

Puppet Client Server Lab Setup

Server

# Set up name resolution for both machines
sudo vim /etc/hosts
## Insert these lines ##
xx.xx.xx.xx puppet-client.local puppet-client # enables the server to reach the client by name
xx.xx.xx.xx puppet-server.local puppet-server

# Install puppet
curl -O https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb 
dpkg -i puppetlabs-release-pc1-xenial.deb

# Install puppet server
sudo apt update
sudo apt install puppetserver -y
systemctl enable puppetserver
systemctl start puppetserver

# Configure memory allocation
vim /etc/default/puppetserver
## change this line to fit your memory allowance ##
JAVA_ARGS="-Xms2g -Xmx2g -XX:MaxPermSize=256m"

# Config firewall
sudo ufw allow 8140

Client

# Set up name resolution for both machines
sudo vim /etc/hosts
## Insert these lines ##
xx.xx.xx.xx puppet-client.local puppet-client
xx.xx.xx.xx puppet-server.local puppet-server # enables the client to reach the server by name

# Install puppet
wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb 
dpkg -i puppetlabs-release-pc1-xenial.deb

# Install puppet agent
sudo apt update
sudo apt install puppet-agent -y 
systemctl enable puppet
systemctl start puppet

Connecting Client To Server

# Create a cert signing request while logged in to the client
/opt/puppetlabs/bin/puppet agent -t --server=puppet-server.local

# Sign the cert signing request while logged in to the server
/opt/puppetlabs/bin/puppet cert list --all # check for existing certs and requests
/opt/puppetlabs/bin/puppet cert sign puppet-client.local # where puppet-client.local is the requesting node

# Test from the client
/opt/puppetlabs/puppet/bin/puppet agent -t --server=puppet-server.local

Example of installing modules: Python

# While logged in to the puppet server

# set current directory as the modules folder
cd /etc/puppetlabs/code/environments/production/modules/

# search for a module from Puppet Forge
$ /opt/puppetlabs/bin/puppet module search python

# install the module we've selected
$ sudo /opt/puppetlabs/bin/puppet module install python

# Install the module to make it available to the manifest inclusions
/opt/puppetlabs/bin/puppet module install puppet-labs-python --version x.xx.x

# verify the module is installed
$ sudo /opt/puppetlabs/bin/puppet module list

# Pushing puppet image out to a client
vim /etc/puppetlabs/code/environments/production/manifests/site.pp
## Insert this content ##
node 'puppet-client.local' {
  include python
  include python::virtualenv
}

# Pull the config while logged in to the client
/opt/puppetlabs/puppet/bin/puppet agent -t --server=puppet-server.local
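The node definition above can also be generated from a script when provisioning many nodes. A sketch that writes to /tmp for safety rather than the live manifests path:

```shell
# Write a minimal site.pp node definition (to /tmp for illustration; the real
# path is /etc/puppetlabs/code/environments/production/manifests/site.pp)
manifest=/tmp/site.pp
cat > "$manifest" <<'EOF'
node 'puppet-client.local' {
  include python
  include python::virtualenv
}
EOF
includes=$(grep -c include "$manifest")
echo "classes included: $includes"
```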

PowerShell: Set Windows Scheduled Task to Send a Pop-up Message

# Set variables
$taskName='Bi-weekly Meeting Reminder'
$time='11:50am'
$daily=New-ScheduledTaskTrigger -Daily -At $time
$everyOtherDay=New-ScheduledTaskTrigger -Daily -DaysInterval 2 -At $time
$biWeekly=New-ScheduledTaskTrigger -Weekly -WeeksInterval 2 -DaysOfWeek Wednesday -At $time
$atLogon=New-ScheduledTaskTrigger -AtLogon

# Command to send a message to user by name
$username='kim'
$command="Send-RDUserMessage -HostServer $env:computername -UnifiedSessionID (query session $username /SERVER:$env:computername|select -skip 1|%{`$_.Split(' ',[System.StringSplitOptions]::RemoveEmptyEntries)})[2] -MessageTitle 'Scheduled Task' -MessageBody '$taskName'"

$principal = New-ScheduledTaskPrincipal -UserID "NT AUTHORITY\SYSTEM" -LogonType ServiceAccount -RunLevel Highest
$settings=New-ScheduledTaskSettingsSet -MultipleInstances IgnoreNew -ExecutionTimeLimit 0
#$action=New-ScheduledTaskAction -Execute "Powershell.exe" -Argument "-ExecutionPolicy Bypass $scriptFile"
#$action=New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument "Add-Type -AssemblyName PresentationFramework;[System.Windows.MessageBox]::Show('$taskName')"
$action=New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument $command

# Unregister the Scheduled task if it already exists
Get-ScheduledTask $taskName -ErrorAction SilentlyContinue | Unregister-ScheduledTask -Confirm:$false;

# Create new scheduled task
Register-ScheduledTask -Action $action -Trigger $biWeekly -TaskName $taskName -Settings $settings -Principal $principal

Toner Cartridge CF283A vs CF283X

These toners will work with HP Pro MFP M127fw M127fn M125nw M201dw M201n M225dn M225dw M125a

The HP 83X High Yield Black Original LaserJet Toner Cartridge is able to print around 700 more pages than the HP 83A Black Original LaserJet Toner Cartridge at 5% density.

Both versions have similar dimensions and identical operating temperature ranges, operating humidity ranges, storage temperature ranges as well as storage humidity ranges.

Although not obvious, the ‘genuine’ A version contains slightly more recycled material than the X version of the HP 83.

How To Install Graylog in a Kubernetes Cluster Using Helm Charts

The following narrative assumes that a Kubernetes cluster (current stable version 20.10) has been set up with MetalLB as the load balancer. This should also work with Traefik or other load balancers.

# Create a separate namespace for this project
kubectl create namespace graylog

# Change into the graylog namespace
kubectl config set-context --current --namespace=graylog
kubectl config view --minify | grep namespace: # Validate it

# Optional: delete previous test instances of graylog that have been deployed via Helm
helm delete "graylog" --namespace graylog
kubectl delete pvc --namespace graylog --all

# How to switch execution context back to the 'default' namespace
kubectl config set-context --current --namespace=default

# Optional: installing mongodb prior to Graylog
helm install "mongodb" bitnami/mongodb --namespace "graylog" \
  --set persistence.size=100Gi
# Sample output:
NAME: mongodb
LAST DEPLOYED: Thu Aug 29 00:07:36 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:
    mongodb.graylog.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace graylog mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, create a MongoDB&reg; client container:
    kubectl run --namespace graylog mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.8-debian-10-r9 --command -- bash
Then, run the following command:
    mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace graylog svc/mongodb 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

# REQUIRED: Pre-install Elasticsearch 7.10, the highest version supported by Graylog 4.1.3
# Source: https://artifacthub.io/packages/helm/elastic/elasticsearch/7.10.2
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --namespace "graylog" \
  --set imageTag=7.10.2 \
  --set data.persistence.size=100Gi
# Sample output:
NAME: elasticsearch
LAST DEPLOYED: Sun Aug 29 04:35:30 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=graylog -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch

# Install Graylog with MongoDB bundled, integrating with the pre-deployed Elasticsearch instance
#
# This install command assumes that the preferred protocol for transporting logs is TCP.
# Also, the current helm chart does not allow mixing TCP with UDP; therefore, this approach conveniently
# matches business requirements where reliable TCP transport is necessary for recording security data.
helm install graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.type=LoadBalancer \
  --set graylog.service.port=80 \
  --set graylog.service.loadBalancerIP=10.10.100.88 \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# Optional: add these lines if the mongodb component has been installed separately
  --set tags.install-mongodb=false \
  --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0 \

# Moreover, the graylog chart version 1.8.4 doesn't seem to set externalTrafficPolicy as expected.
# Set externalTrafficPolicy = local to preserve source client IPs
kubectl patch svc graylog-web -n graylog -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Sometimes, the static EXTERNAL-IP would be assigned to graylog-master, where graylog-web EXTERNAL-IP would
# remain in the status of <pending> indefinitely.
# Workaround: set services to share a single external IP
kubectl patch svc graylog-web -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip":"graylog"}}}'
kubectl patch svc graylog-master -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'
kubectl patch svc graylog-web -n graylog -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.10.100.88"]}}'

# Test sending logs to the server via TCP
# (note: shell variable names cannot contain hyphens)
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -w 1 $graylogServer 514

# Test via UDP
graylogServer=graylog.kimconnect.com
echo -e '{"version": "1.1","host":"kimconnect.com","short_message":"Short message","full_message":"This is a\n\nlong message","level":9000,"_user_id":9000,"_ip_address":"1.1.1.1","_location":"LAX"}\0' | nc -u -w 1 $graylogServer 514
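The one-line test payloads above are easier to maintain when built from variables. A sketch; level 1 is a standard syslog-style severity used here for illustration, and the send command is commented out since it needs a reachable server:

```shell
# Build a GELF 1.1 test payload from variables
host=kimconnect.com
short_message='Short message'
payload=$(printf '{"version":"1.1","host":"%s","short_message":"%s","level":1}' \
  "$host" "$short_message")
echo "$payload"
# To send over the gelf port from the install above:
#   printf '%s\0' "$payload" | nc -w 1 graylog.kimconnect.com 12201
```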

# Optional: graylog Ingress
cat > graylog-ingress.yaml <<EOF
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: graylog-ingress
  namespace: graylog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # set these for SSL
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"  
spec:
  rules:
  - host: graylog.kimconnect.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 12201
      - path: /
        pathType: Prefix
        backend:
          service:
            name: graylog-web
            port:
              number: 514              
EOF
kubectl apply -f graylog-ingress.yaml

Troubleshooting Notes:

# Sample commands to patch graylog service components
kubectl patch svc graylog-web -p '{"spec":{"type":"LoadBalancer"}}' # Convert ClusterIP to LoadBalancer to gain ingress
kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}' # Add externalIPs
kubectl patch svc graylog-master -n graylog -p '{"spec":{"loadBalancerIP":""}}' # Remove loadBalancer IPs
kubectl patch svc graylog-master -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}' # Purge ingress IPs
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[{"ip":"10.10.100.88"}]}}}'
kubectl patch svc graylog-web -n graylog -p '{"status":{"loadBalancer":{"ingress":[]}}}'

# Alternative solution: mixing UDP with TCP
# The current chart version only allows this when service Type = ClusterIP (default)
helm upgrade graylog kongz/graylog --namespace "graylog" \
  --set graylog.image.repository="graylog/graylog:4.1.3-1" \
  --set graylog.persistence.size=200Gi \
  --set graylog.service.externalTrafficPolicy=Local \
  --set graylog.service.port=80 \
  --set graylog.service.ports[0].name=gelf \
  --set graylog.service.ports[0].port=12201 \
  --set graylog.service.ports[0].protocol=UDP \
  --set graylog.service.ports[1].name=syslog \
  --set graylog.service.ports[1].port=514 \
  --set graylog.service.ports[1].protocol=UDP \
  --set graylog.rootPassword="SOMEPASSWORD" \
  --set tags.install-elasticsearch=false \
  --set graylog.elasticsearch.version=7 \
  --set graylog.elasticsearch.hosts=http://elasticsearch-master.graylog.svc.cluster.local:9200

# This error occurs when combining TCP with UDP; hence, type ClusterIP must be used
Error: UPGRADE FAILED: cannot patch "graylog-web" with kind Service: Service "graylog-web" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"graylog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:9000, StrVal:""}, NodePort:32518}, core.ServicePort{Name:"gelf", Protocol:"UDP", AppProtocol:(*string)(nil), Port:12201, TargetPort:intstr.IntOrString{Type:0, IntVal:12201, StrVal:""}, NodePort:0}, core.ServicePort{Name:"gelf2", Protocol:"TCP", AppProtocol:(*string)(nil), Port:12222, TargetPort:intstr.IntOrString{Type:0, IntVal:12222, StrVal:""}, NodePort:31523}, core.ServicePort{Name:"syslog", Protocol:"TCP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:31626}}: may not contain more than 1 protocol when type is 'LoadBalancer'

# Another error: an array-type value is expected instead of a string
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Service.spec.externalIPs): invalid type for io.k8s.api.core.v1.ServiceSpec.externalIPs: got "string", expected "array"
# Solution:
--set "array={a,b,c}" OR --set service[0].port=80

# Graylog would not start and this was the error:
com.github.joschi.jadconfig.ValidationException: Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable

# Workaround
graylogData=/mnt/k8s/graylog-journal-graylog-0-pvc-04dd9c7f-a771-4041-b549-5b4664de7249/
chown -fR 1100:1100 $graylogData
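The workaround above can be generalized into a small pre-flight check, shown here as a hedged sketch (the PV mount path is specific to this cluster, and UID/GID 1100 is inferred from the "not writable" error message):

```shell
# Verify ownership of the Graylog journal directory on the PV before restarting the pod.
# Assumption: the graylog container runs as UID/GID 1100, per the error above.
graylogData=/mnt/k8s/graylog-journal-graylog-0-pvc-04dd9c7f-a771-4041-b549-5b4664de7249/
if [ -d "$graylogData" ]; then
  echo "current owner: $(stat -c '%u:%g' "$graylogData")"
  chown -fR 1100:1100 "$graylogData"   # requires root
fi
```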

NAME: graylog
LAST DEPLOYED: Thu Aug 29 03:26:00 2021
NAMESPACE: graylog
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To connect to your Graylog server:
1. Get the application URL by running these commands:
  Graylog Web Interface uses JavaScript to get detail of each node. The client JavaScript cannot communicate to node when service type is `ClusterIP`.
  If you want to access Graylog Web Interface, you need to enable Ingress.
    NOTE: Port Forward does not work with web interface.
2. The Graylog root users
  echo "User: admin"
  echo "Password: $(kubectl get secret --namespace graylog graylog -o "jsonpath={.data['graylog-password-secret']}" | base64 --decode)"
To send logs to graylog:
  NOTE: If `graylog.input` is empty, you cannot send logs from other services. Please make sure the value is not empty.
        See https://github.com/KongZ/charts/tree/main/charts/graylog#input for detail

k describe pod graylog-0
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  11m                   default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         11m                   default-scheduler  Successfully assigned graylog/graylog-0 to linux03
  Normal   Pulled            11m                   kubelet            Container image "alpine" already present on machine
  Normal   Created           11m                   kubelet            Created container setup
  Normal   Started           10m                   kubelet            Started container setup
  Normal   Started           4m7s (x5 over 10m)    kubelet            Started container graylog-server
  Warning  Unhealthy         3m4s (x4 over 9m14s)  kubelet            Readiness probe failed: Get "http://172.16.90.197:9000/api/system/lbstatus": dial tcp 172.16.90.197:9000: connect: connection refused
  Normal   Pulled            2m29s (x6 over 10m)   kubelet            Container image "graylog/graylog:4.1.3-1" already present on machine
  Normal   Created           2m19s (x6 over 10m)   kubelet            Created container graylog-server
  Warning  BackOff           83s (x3 over 2m54s)   kubelet            Back-off restarting failed container

# The recurring error: Readiness probe failed: Get "http://172.16.90.197:9000/api/system/lbstatus": dial tcp 172.16.90.197:9000: connect: connection refused

# Set external IP
# This only works on LoadBalancer, not ClusterIP
# kubectl patch svc graylog-web -p '{"spec":{"externalIPs":["10.10.100.88"]}}'
# kubectl patch svc graylog-master -p '{"spec":{"externalIPs":[]}}'

kubectl patch service graylog-web --type='json' -p='[{"op": "add", "path": "/metadata/annotations/kubernetes.io~1ingress.class", "value":"nginx"}]'

# Set annotation to allow shared IPs between 2 different services
kubectl annotate service graylog-web metallb.universe.tf/allow-shared-ip=graylog
kubectl annotate service graylog-master metallb.universe.tf/allow-shared-ip=graylog

# The equivalent annotations, declared in a service manifest:
metadata:
  name: $serviceName-tcp
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: psk

# Ingress
appName=graylog
domain=graylog.kimconnect.com
deploymentName=graylog-web
containerPort=9000
cat <<EOF> $appName-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $appName-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # ingress.kubernetes.io/rewrite-target: /
    # acme http01
    # acme.cert-manager.io/http01-edit-in-place: "true"
    # acme.cert-manager.io/http01-ingress-class: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: $domain
    http:
      paths:
      - backend:
          service:
            name: $deploymentName
            port:
              number: $containerPort
        path: /
        pathType: Prefix
EOF
kubectl apply -f $appName-ingress.yaml

# delete pvc's
namespace=graylog
kubectl delete pvc data-graylog-elasticsearch-data-0 -n $namespace
kubectl delete pvc data-graylog-elasticsearch-master-0 -n $namespace
kubectl delete pvc datadir-graylog-mongodb-0 -n $namespace
kubectl delete pvc journal-graylog-0 -n $namespace

# delete all pvc's in namespace the easier way
namespace=graylog
kubectl get pvc -n $namespace --no-headers | awk '{print $1}' | while read vol; do kubectl delete pvc/${vol} -n $namespace; done

2021-08-20 20:19:41,048 INFO    [cluster] - Exception in monitor thread while connecting to server mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017 - {}
com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[graylog.jar:?]
        at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[graylog.jar:?]
        at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[graylog.jar:?]
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [graylog.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302]
Caused by: java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
        at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_302]
        at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_302]
        at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[graylog.jar:?]
        ... 5 more

2021-08-20 20:19:42,981 INFO    [cluster] - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@69419d59 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}, caused by {java.net.UnknownHostException: mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local}}]}. Waiting for 30000 ms before timing out - {}
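The UnknownHostException above means the MongoDB replica-set FQDN no longer resolves inside the cluster, which usually indicates the service or StatefulSet was renamed or removed. A hedged troubleshooting sketch (the busybox image tag is an assumption):

```shell
# List MongoDB-related objects to see what name the cluster actually exposes
kubectl get svc,statefulset -n graylog | grep -i mongo
# Resolve the FQDN from inside the cluster using a throwaway pod
kubectl run dns-test -n graylog --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local
```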

# Alternative version - that doesn't work
# helm repo add groundhog2k https://groundhog2k.github.io/helm-charts/
# helm install graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri='http://127.0.0.1:9000/' \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true

# helm upgrade graylog groundhog2k/graylog --namespace "graylog" \
#   --set image.tag=4.1.3-1 \
#   --set settings.http.publishUri=http://localhost:9000/ \
#   --set service.externalTrafficPolicy=Local \
#   --set service.type=LoadBalancer \
#   --set service.loadBalancerIP=192.168.100.88 \
#   --set elasticsearch.enabled=true \
#   --set mongodb.enabled=true \
#   --set storage.className=nfs-client \
#   --set storage.requestedSize=200Gi

# kim@linux01:~$ k logs graylog-0
# 2021-08-29 03:47:09,345 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
# com.github.joschi.jadconfig.ValidationException: Couldn't run validator method
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:227) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:100) ~[graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:420) [graylog.jar:?]
#         at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:236) [graylog.jar:?]
#         at org.graylog2.bootstrap.Main.main(Main.java:45) [graylog.jar:?]
# Caused by: java.lang.reflect.InvocationTargetException
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]
#         ... 4 more
# Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: "!s"
#         at java.net.URLDecoder.decode(URLDecoder.java:194) ~[?:1.8.0_302]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1035) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.urldecode(ConnectionString.java:1030) ~[graylog.jar:?]
#         at com.mongodb.ConnectionString.<init>(ConnectionString.java:336) ~[graylog.jar:?]
#         at com.mongodb.MongoClientURI.<init>(MongoClientURI.java:256) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.getMongoClientURI(MongoDbConfiguration.java:59) ~[graylog.jar:?]
#         at org.graylog2.configuration.MongoDbConfiguration.validate(MongoDbConfiguration.java:64) ~[graylog.jar:?]
#         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
#         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
#         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
#         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
#         at com.github.joschi.jadconfig.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:53) ~[graylog.jar:?]
#         at com.github.joschi.jadconfig.JadConfig.invokeValidatorMethods(JadConfig.java:221) ~[graylog.jar:?]

How to Setup Dynamic DNS with Google Domains & Ubiquity EdgeRouter

Step 1: Set up Dynamic DNS

– Access Google Domains: https://domains.google.com/registrar/
– Click on the Manage button, next to your domain
– Click on DNS
– Scroll toward the bottom to click on Advanced Settings
– Click on Manage dynamic DNS
– Leave the hostname field blank, click on Save
– If this domain already has a record, click on Replace to proceed or Cancel to input a different sub-domain
– Click on the drop-down menu next to ‘Your domain has Dynamic DNS setup’
– Select View credentials to trigger a pop-up window
– Click on View to see the username and password generated for this domain
– Copy and paste the information into a notepad to be used in ‘Step 2’
– Select Close

Step 2: Configure EdgeRouter with Dynamic DNS

– Access the router: https://ip.address.of.router/#Services/DNS
– In section Dynamic DNS, click the Add Dynamic DNS Interface button
– Set these values:
  – Interface: eth0 (or WAN interface)
  – Web: <leave blank>
  – Web-skip: <leave blank>
  – Service: dyndns
  – Hostname: kimconnect.com (or the hostname that has been setup in step 1)
  – Login: {username copied in step 1}
  – Password: {password copied in step 1}
  – Protocol: dyndns2
  – Server: domains.google.com
– Click on Apply
– Click on Force Update and expect this message: ‘The configuration has been applied successfully’
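The same update the EdgeRouter sends can be issued manually with curl against Google's dynamic DNS endpoint, which is a convenient way to validate the credentials from Step 1 before configuring the router (placeholder values shown):

```shell
# Manual Google Domains dynamic DNS update; replace the placeholders with
# the username/password generated in Step 1 and your actual hostname.
username='generated-username'
password='generated-password'
hostname='kimconnect.com'
curl -s "https://${username}:${password}@domains.google.com/nic/update?hostname=${hostname}"
# "good <ip>" or "nochg <ip>" indicates success; "badauth" means bad credentials.
```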

Linux: Creating Soft Links to Directories

Optional test: create a soft link to a directory, as hard links to directories are not allowed

source=/nfs-share/linux03/docker/containers
destination=/var/lib/docker/containers
sudo mkdir -p $source
sudo ln -sfn $source $destination

# Expected result: the link is created, or silently replaced if it already exists.
# The -sfn flags force-overwrite an existing link, avoiding this error:
# ln: failed to create symbolic link '/var/lib/docker/containers': File exists
# -n, --no-dereference: treat LINK_NAME as a normal file if it is a symbolic link to a directory

The following is a fail-safe sequence for replacing an existing directory with a symlink to a desired destination. In this example, the directory is held open by a process named docker; thus, it’s necessary to stop that process, delete the original directory, and recreate it as a link toward the destination.

# The below sequence would pre-empt this error:
# ln: /var/lib/docker/containers: cannot overwrite directory
sudo su
systemctl stop docker
directoryname=containers
source=/nfs-share/linux03/docker/$directoryname
destinationdirectory=/var/lib/docker
sudo mkdir -p $source
sudo rmdir $destinationdirectory/$directoryname # rmdir only removes an empty directory; move its contents to $source first
sudo ln -sfn $source $destinationdirectory/$directoryname
systemctl start docker
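After docker restarts, it’s worth confirming that the link resolves to the NFS-backed path before trusting it with data (paths from the example above):

```shell
# Show the link itself, and where it points
ls -ld /var/lib/docker/containers
readlink /var/lib/docker/containers   # expected: /nfs-share/linux03/docker/containers
```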

Optional: how to remove a symlink

directoryname=containers
destinationdirectory=/var/lib/docker
rm $destinationdirectory/$directoryname
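Plain rm removes only the link, leaving the target directory intact; be careful never to run rm -r with a trailing slash on the link, as that follows the link and deletes the target’s contents. A self-contained demo in a temporary sandbox:

```shell
# Create and remove a directory symlink without touching the source directory
sandbox=$(mktemp -d)
mkdir -p "$sandbox/source"
ln -sfn "$sandbox/source" "$sandbox/link"
[ -L "$sandbox/link" ] && echo "link created"
rm "$sandbox/link"       # no trailing slash: removes only the link
[ ! -e "$sandbox/link" ] && [ -d "$sandbox/source" ] && echo "source intact"
rm -rf "$sandbox"
```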