An Indication That Microsoft Office 365 Email Connector Doesn’t Like Large IP Blocks

As we’ve advised our clients to configure O365 to ‘whitelist’ (allow) email relayed from our email servers, a problem appears when the clients attempt to configure email connectors with a large CIDR block such as a /20.
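In our testing the connector UI appears to reject ranges broader than a /24, so one workaround is to split the large block into /24 subnets before entering them. Below is a minimal sketch using Python’s ipaddress module; the 203.0.113.0/20 block is a placeholder, and the /24 limit is our observation rather than a documented figure.

```python
import ipaddress

def split_block(cidr: str, max_prefix: int = 24) -> list:
    """Split a wide CIDR block into /24-or-narrower subnets that
    the connector UI will accept (the /24 limit is our observation)."""
    net = ipaddress.ip_network(cidr, strict=False)
    if net.prefixlen >= max_prefix:
        return [str(net)]
    return [str(s) for s in net.subnets(new_prefix=max_prefix)]

# A /20 yields 16 /24 networks (placeholder block):
blocks = split_block("203.0.113.0/20")
```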



Domain Name Records Overview: A-record, MX, DKIM, SPF, SRV

A RECORD (A-host):

– What: address record (A-record) specifies the IP address(es) of a given domain. In the case of IPv6, this is called an AAAA record.
– Why: name-to-address translation lets users type a name and reach the IP address of the web server
– Who: domain admin sets these up, and these affect all users of the domain
– How: record: type: value: TTL
@ A x.x.x.x 14400
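As an illustration of the name-to-address translation above, here is a short Python sketch that asks the system resolver for a name’s IPv4 addresses (the hostname you query would be your own domain; localhost is used only as a self-contained example):

```python
import socket

def a_records(hostname: str) -> list:
    """Return the IPv4 addresses a name resolves to via the system resolver."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] = IP
    return sorted({info[4][0] for info in infos})
```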

MX (Mail Exchange):

– What: mail exchange (MX) records direct email toward designated mail servers. They resemble CNAME records in that they point to a server name, but they are specifically marked as designated for mail
– Why: these entries control how email messages should be routed in accordance with the Simple Mail Transfer Protocol (SMTP)
– Who: domain admins can edit these records
– How: below is an example of setting a domain’s mail records toward 2 mail servers with different priorities
record: type: priority: value: TTL
@ MX 10 45000
@ MX 20 45000

SPF (Sender Policy Framework):

– What: Sender Policy Framework (SPF) is a type of TXT record in your DNS zone
– Why: SPF records identify which mail servers are permitted to send email on behalf of your domain. These records prevent spammers from sending emails with forged ‘From’ addresses of your domain
– Who: domain admins can make these changes. Users benefit from not receiving forged emails, and would correctly receive emails being sent from company servers.
– How (examples):
a. Simple:
- v=spf1 ~all (Google)
- v=spf1 ~all (Microsoft)
b. Complex:
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK ~all (Google)
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK ~all (Microsoft)
- v=spf1 ip4:IP.ADDRESS.HERE/NETMASK ~all (Google, Microsoft, ClickDimensions, Autotask)


  • v=spf1 : marks the SPF protocol version (version 1 is the most commonly used by email servers as of this writing)
  • ip4 or ip6 : specifies the IP address version. A single IP or a summarized subnet/supernet is acceptable
  • mx : allows the domain’s MX servers to send mail
  • include : allows a third party to send emails on your domain’s behalf
  • a : allows the IP(s) in the domain’s A record to send mail
  • +all : allows any IP to send emails on this domain’s behalf
  • -all : hard fail; no other IPs may send emails on the domain’s behalf
  • ~all : soft fail; other IPs may still send emails on the domain’s behalf, but their messages would be marked as suspicious
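The mechanisms above can be pulled out of a record mechanically. Here is a small sketch of an SPF record tokenizer; the sample record is illustrative (the IP block and include host are examples, not the elided vendor values above):

```python
def parse_spf(record: str) -> dict:
    """Tokenize an SPF TXT record into (qualifier, mechanism) pairs.
    Qualifiers are +, -, ~, ?; a missing qualifier defaults to +."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF version-1 record")
    mechanisms = []
    for term in parts[1:]:
        qualifier = "+"
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        mechanisms.append((qualifier, term))
    return {"version": "spf1", "mechanisms": mechanisms}

# Illustrative record (IP block and include host are examples):
spf = parse_spf("v=spf1 ip4:203.0.113.0/24 include:spf.protection.outlook.com ~all")
```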

DKIM (DomainKeys Identified Mail):

– What: an email-authentication record associated with a domain, composed of a selector and a public key. A matching private key is installed on the email server and is used to sign outgoing messages; the signatures are attached to email headers. Only the public key is published as the domain’s DNS record. The receiving email server verifies the signature against the published key to determine whether the email is legitimate (not spam)
– Why: to prevent email spoofing
– Who: domain admins make these changes
– How: (source: Google)

  1.  Generate the domain key for your domain (For Google: https://
  2.  Add the public key to your domain’s DNS records
    • Example: 300 IN TXT "v=DKIM1; k=rsa; p=SOMEHASH" "MOREHASH"
  3.  Add DKIM onto email server(s) to start adding a DKIM signature to all outgoing messages
    • Example: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; s=google;
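The two quoted strings in the TXT example in step 2 ("SOMEHASH" "MOREHASH") exist because a single DNS character-string is limited to 255 bytes, so long RSA public keys must be split across several quoted strings. A small sketch of that chunking, with dummy key material standing in for the real hash:

```python
def chunk_txt(value: str, limit: int = 255) -> str:
    """Render a long TXT value (e.g. a DKIM public key) as the
    space-separated quoted strings DNS requires, each <= 255 chars."""
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
    return " ".join(f'"{c}"' for c in chunks)

# Dummy key material standing in for SOMEHASH/MOREHASH:
record = chunk_txt("v=DKIM1; k=rsa; p=" + "A" * 400)
```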

SRV (Service Records):

– What: service (SRV) records specify hosts and ports for services such as VoIP, instant messaging, domain proof of ownership, etc.
– Why: these records carry host and port information that other DNS record types do not provide. Some Internet protocols require the use of SRV records in order to function.
– Who: domain admins manage these at DNS zone control panels
– How: SRV records must point to an A record (in IPv4) or an AAAA record (in IPv6), not CNAME. Below are some examples
_sip._tls.@ 100 1 443 (Microsoft Lync)
_sipfederationtls._tcp.@ 100 1 5061 (Microsoft Lync)
86400 IN SRV 10 5 5223 (xmpp server)
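The numeric fields in these records are priority, weight, and port, in that order. Below is a simplified sketch of target selection: RFC 2782 specifies weighted-random selection among equal priorities, but this sketch deterministically picks the highest weight instead, and the target hostnames are placeholders.

```python
from dataclasses import dataclass

@dataclass
class SrvRecord:
    priority: int   # lower value is preferred
    weight: int     # load-sharing hint among equal priorities
    port: int
    target: str

def pick_srv(records: list) -> SrvRecord:
    """Choose a target: lowest priority wins; among equal priorities,
    take the highest weight (deterministic simplification of RFC 2782)."""
    best = min(r.priority for r in records)
    candidates = [r for r in records if r.priority == best]
    return max(candidates, key=lambda r: r.weight)

# The Lync and XMPP examples above, with placeholder targets:
records = [
    SrvRecord(100, 1, 443, "sip.example.com"),
    SrvRecord(10, 5, 5223, "xmpp.example.com"),
]
```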

Azure: TrafficManager


Create a Traffic Manager profile for KimConnect using the Azure portal.


“Setup Phase”
1. Create a Traffic Manager profile
2. Add Traffic Manager endpoints
3. Test Traffic Manager profile
“Cutover Phase”
1. Edit public DNS entries for ADFS to point to the public IP of Traffic Manager
2. Validate successful routing of name resolution toward active endpoints
3. Optional: roll-back procedures

Setup Phase

1. Create Traffic Manager profile
a. Log into Azure as a Global Admin > search for “Traffic Manager profile” > Create
– Name: “WestUS-PROD-ADFS”
– Routing method: Priority
– Subscription: select KimConnect’s current subscription
– Resource Group: Select KimConnect’s current Resource Group
– Location: select US West region

2. Add Traffic Manager endpoints
a. Search for the newly created Traffic Manager: Resource Groups > click on the correct resource group (KimConnect- DNS) > click on the newly created profile (“WestUS-PROD-ADFS”) > copy the DNS name > Endpoints > Add > input these settings
— Type: External endpoint
— Name: “WestUS-PROD-ADFS-IP1”
— FQDN or IP:
— Priority: 1
— Custom Header settings:
b. Select OK > repeat this procedure for the secondary endpoint “WestUS-PROD-ADFS-IP2” (associated with
c. Navigate back to the TM > Configuration > set Protocol = HTTPS, Port = 443, Path = /adfs/ls/idpinitiatedsignon.aspx > Save
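Conceptually, the Priority routing method chosen above always answers DNS queries with the lowest-priority endpoint that is enabled and passing its health probe. A toy model of that behavior (the endpoint dictionaries are hypothetical stand-ins, not the real Azure API):

```python
def route(endpoints: list) -> str:
    """Simplified 'Priority' routing: answer with the FQDN of the
    enabled, probe-healthy endpoint that has the lowest priority value."""
    live = [e for e in endpoints if e["enabled"] and e["healthy"]]
    if not live:
        raise RuntimeError("no healthy endpoints")
    return min(live, key=lambda e: e["priority"])["fqdn"]

# Hypothetical endpoints mirroring WestUS-PROD-ADFS-IP1/IP2:
endpoints = [
    {"fqdn": "adfs1.example.com", "priority": 1, "enabled": True, "healthy": True},
    {"fqdn": "adfs2.example.com", "priority": 2, "enabled": True, "healthy": True},
]
```

Disabling the priority-1 endpoint (as in the test steps below) makes the priority-2 endpoint take over.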

3. Test setup
a. Edit the hosts file of the local Windows workstation/laptop to associate the Traffic Manager’s IP with the DNS record of ADFS. Alternatively, install Technitium DNS Server to create a CNAME record
b. Search for the newly created Traffic Manager > Overview > Subscription > verify that the “DNS name” was correctly set
c. Open a web browser, navigate to that URL, and validate that it resolves
d. Select Overview > select “WestUS-PROD-ADFS-IP1” > select Disabled > Save > open a web browser to validate that the URL still resolves
e. Reverse the local hosts file changes from part (a)
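The hosts-file edit in steps (a) and (e) can be scripted. Here is a cautious sketch that operates on the file’s text rather than the file itself, so it is easy to test; the ADFS hostname and IP are placeholders:

```python
def add_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Append an ip -> name mapping unless the name is already mapped."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()   # ignore comments
        if len(fields) >= 2 and name in fields[1:]:
            return hosts_text                    # already present
    return hosts_text.rstrip("\n") + f"\n{ip}\t{name}\n"

def remove_hosts_entry(hosts_text: str, name: str) -> str:
    """Drop any line mapping the given name (reverses the change)."""
    kept = [line for line in hosts_text.splitlines()
            if name not in line.split("#", 1)[0].split()[1:]]
    return "\n".join(kept) + "\n"
```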

Cutover Phase

1. Edit public DNS entries for ADFS to point to the public IP of Traffic Manager
2. Validate successful routing of name resolution toward active endpoints
a. If necessary, ensure that the hosts file of the local Windows workstation/laptop has no entries related to ADFS
b. Open a web browser to validate that the URL still resolves and that logins are functional as expected. If not, follow the roll-back procedures
c. Logon to Azure > search for the newly created Traffic Manager > select Overview > select “WestUS-PROD-ADFS-IP1” > select Disabled > Save > open a web browser to validate that URL still resolves
3. Optional: roll-back procedures
a. Edit the hosts file of the local Windows workstation/laptop to associate the Traffic Manager’s IP with the DNS record of ADFS
b. Logon to Azure > Edit public DNS entries for ADFS to point back to the previously set IPs

Storage: Cohesity(tm) Basics

The intention of this posting is to review a product known as Cohesity, a trademark of Cohesity Inc. I recommend this system as a hybrid on-prem & cloud backup storage for infrastructures of 2020…

List of How-Tos:

Register Host: Protection > Sources > Register > select Physical Server > Input Hostname > click Register
Creating Protection Job: Protection > Protection Jobs > select Physical Server (File Based) > input Job title > Source = Physical Servers > put check-mark next to the Physical Server name > Path to protect = path to protect > Add > Policy = (gold,silver,bronze) > storage domain = pick one > Protect
Recovery: Protection > Recovery > Recover > Files and Folders > type in the keyword for a file name > Download Now

Registering a Source: Protection > Sources > Register > Hypervisor > Select Hypervisor Source Type = VMware: Standalone ESXi Host or VMware: vCenter > Enter hostname or IP > input Username & Password > Register > check the Source column to verify success
Creating Custom Policy: Protection > Policy Manager > Create Policy > Set Title, Description, schedule, other options as desired > Create
Creating Protection Job and Setting Policy: Protection > Protection Jobs > Protect > Virtual Server > set Name, Description > select the correct Source corresponding to the standalone ESXi host > on the pop-up, configure the backup items by expanding the drop-down menus and selecting the correct items (guest VMs) > Add > Policy = pick from the list > Storage Domain = pick from list > click Protect > refresh the screen by clicking on Protection > Protection Jobs > verify backup progress
Recovering a VM: Protection > Recovery > Recover > VMs > search for the guest VM server label > select the correct item > Continue > set Task name (if necessary) > select Rename Recovered VMs > Add prefix = Recover- > Finish > verify that a new guest VM has been added to the VM host by logging into that host
Recovering a File from within a VM: Protection > Recovery > Recover > Files and Folders > search for a file name > click on the correct item > click on Download Now or Recover to Server as desired

Creating a Cohesity NAS View: Platform > Views > Create View > input view name, description > set Storage Domain = the correct storage domain > set QoS policy = the correct policy (e.g. Backup Target Low) > Create View
Adding a Global Whitelist: Platform > Views > Global Whitelist > input Subnet IP, mask, and description > Add > click on SMB Authentication & Global Whitelists > confirm that the new whitelist has been populated
Accessing a View: Platform > Views > click on the three dots “…” corresponding to the desired item > select Copy SMB Path to Clipboard > open Explorer via Run: Explorer.exe > paste the path and navigate to it > put a file into that directory
File Filtering of a View: Platform > Views > click on a view item > click on the Pencil icon to edit > Show Advanced Settings > set File Filtering to ON > add Blacklist or Whitelist file extensions > Add > Update View
Note: whitelisting a specific file extension means that every other extension will be blacklisted
Views Protection: Platform > Views > click on a view item > click on the Shield icon to create a protection job > input job name, description > set Policy > Protect > validate by selecting the Protection tab > click on the newly created protection job > inspect its Run Status
Recovering a File from a Protected View: Protection > Recovery > Recover > Files and Folders > select option Browse or Specify Path > search for a view by name > click on search result > locate a file to be recovered > Download File
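The whitelist/blacklist semantics described under File Filtering above can be expressed compactly. A sketch of the rule as we understand it (extension names are examples, not Cohesity defaults):

```python
def allowed(filename: str, whitelist=frozenset(), blacklist=frozenset()) -> bool:
    """Apply view file-filtering: a non-empty whitelist implicitly blocks
    every other extension; otherwise a blacklist implicitly allows the rest."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if whitelist:
        return ext in whitelist
    return ext not in blacklist
```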

Cloud Archive and Cloud Recover
Register an External Source: Protection > External Target > Register External Target > Set these values:
Name = SOMENAME, Description = null, Type = NAS,
NAS Host = DNS name of Linux NAS, Mount path = Local path on that NAS host (e.g. /mnt/Archive) > click Register > verify details of created external target

Creating an Archival Policy: Protection > Policy Manager > Create Policy > set Title = POLICY-NAME > Add Archival > set Archive to: Name of newly created external target > Create > visually verify that the new policy has been added onto the resulting list
Protection Job with Archival: Protection > Protection Jobs > Protect > Virtual Server > set Job Title = Name of Job > click Register Source > click drop-down menu to select Hypervisor Source Type (e.g. VMware: Standalone ESXi Host) > input Hostname, Username, Password > Register > Expand objects to reach the guest VMs nodes > put a check mark next to guest VM(s) to protect > Add > Policy = newly created external policy > set Storage Domain > Protect

Monitoring Archive Job: Protection > Protection Jobs > click on the time-stamp of the external job (not the hyperlink of the external job name) > select Cloud Archive Task > view its status
Restoring a File from Archive: Protection > Recovery > Recover > Files or Folders > search for a file by inputting its name > click on the result > click on the Cloud icon (which will turn blue) > Download Now > monitor recovery progress for a result of Download-files_{time stamp} > click on that result > wait for the file to be downloaded from the external cloud storage > click Download Files > navigate to the client’s Downloads folder for the Download-files_{time stamp}.zip

Recovery process: Protection job => clone snapshot to a view (original snapshot untouched) => recover objects => destroy view/clone

Setup AWS S3 Archive: “The S3 Cloud provider has prerequisites that need to be accomplished prior to being able to use it as an S3 external target. You will need to record the appropriate credential types needed for S3 access.”
Register a S3 External Target: Protection > External Target > Register External Target > set Title, Type = AWS > S3-IA, Bucket Name, Region, Access Key ID, Secret Access Key > Register > verify the newly registered external target
– Register a VMware Server:
Protection > Sources > Register > Hypervisor > Select Hypervisor Source Type = VMware: Standalone ESXi Host > input Hostname, Username, Password > Register > verify result as displayed at the bottom of the Sources page
CloudRetrieve Search and Recovery: Protection > Cloud Retrieve > Start Search > set External Target = AWS-S3-Archive (previously named target) > Protection Job Name = input the name of the guest VM to be recovered > Search > wait for search results to populate > Stop Search > put a check mark next to a desired result > click on the drop-down menu to Select Storage Domain > pick a target domain to store the retrieved backup from the cloud (e.g. sd-idd-ic) > Download > OK > wait until both the Meta-Data and Snapshot show completion statuses, denoted by a green “Success” icon
Failover an Inactive Protection Job: Protection > Protection Jobs > Locate the Protection Job listed as “Inactive” (toward the bottom) > click on the three dots “…” (more-action icon) > select Failover > set Failover to Source = {an ESXi host}, Policy = [Gold|Silver|Bronze] > click Failover Job and Continue to Recovery > set Networking Options = Attach to a new network = {select a VM Network} > Finish > wait until the Cloud retrieval progress bar shows 100% completed > navigate back to Protection > Protection Jobs > verify that the job is no longer marked as Inactive
Verify successful recovery from ESXi Server: login to the target ESXi host > search for recovered server by name using the Search field

Register a S3 External Target: Protection > External Targets > Register External Target > input Title (e.g. AWS-S3-Archive), Type=[AWS S3-IA|Glacier|S3], Bucket name, Region, Access Key ID, Secret Access Key > Register
Register a VMware Server: Protection > Sources > Register > Hypervisor > input Hypervisor Source Type (e.g. Standalone ESXi Host), Hostname, Username, Password > Register
Create Protection Job and set Policy: Protection > Protection Jobs > Protect > Virtual Server > set Job name > click on the drop-down menu to select Source > expand until the guest VMs are shown > put a check mark next to a desired guest VM > Add > set Policy = [Gold|Silver|Bronze] > click Edit next to the selected Policy > scroll down to find the Archival section > click on Add Archival > set Archive to = {name of AWS-S3-Archive} > Save > set Storage Domain = {name of Storage Domain (e.g. sd-idd-ic)} > leave everything else default > click Protect > scroll toward the bottom > click on the newly created job > monitor the Run Status until a green “Success” icon appears > click on that job to view details > select the Cloud Archive Task tab > monitor its completion status bar for 100% completion

Recover a VM from a Cloud Archive: Protection > Recovery > Recover > VMs > type the guest VM name into the search bar > click on the hyperlink of the desired result (not the check-mark next to that item) > click on the pencil icon to change Recover As options > click on the cloud icon (this icon won’t appear until prior archive jobs are completed) to recover the VM from the Cloud Archive location > set Rename Recovered VMs = True, Add Prefix = cloudRecover- > Finish > monitor Cloud retrieval progress for 100% completion > connect to the VMware host to verify that the guest VM has been recovered

Other Notes:

– Cohesity & Veeam interoperability:

Object: the item inside a source that we’re protecting
Protection Job: defines source and what to backup
View: provides storage location in a cluster
Policy: what to execute

Key Values
Consolidation: Consolidated functionality for Secondary Storage Needs
Data Mobility: Data in right place at right time
Simplicity: easy to deploy and use
Protection: many capabilities

Data Platform <=> Cohesity DataPlatform <=> Data Platform cloud edition

C2000, C3000, C4000 series
Cohesity DataPlatform
Dell, HPE ProLiant, Cisco UCS

Version 6.1.1 – current
Bare metal: Cristie software
Granular recovery: Ontrack
Tape backup: Qstar
Universal Backup Device: LaserVault

256 nodes tested
Global indexing and search
Multi-tenant with QoS
Sequential and random IO
Automated tiering
Unlimited scaling
Strictly consistent
Multi-protocol (NFS/SMB/S3)
Global dedupe
SnapTree limitless snaps & clones
Self healing

Disks => controller => Dual 10G => Storage Domains (view 1 to view X of NFS/SMB/S3, optional: inline dedupe / compression / encryption) => Partition => Cluster

Storage Domains

Data reduction/redundancy:
– Deduplication (inline or post process)
– Compression (inline or postprocess)
– Erasure coding vs replication factor
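The erasure coding vs. replication factor trade-off comes down to raw-capacity overhead: replication factor 2 (RF2) stores two full copies (2.0x), while a 4+2 erasure-coding layout writes 6 stripes for every 4 data stripes (1.5x). A quick arithmetic sketch; the schemes shown are common examples, not necessarily Cohesity’s exact defaults:

```python
def raw_needed(logical_tb: float, scheme: str) -> float:
    """Raw capacity required for a given logical capacity.
    RF2 keeps two full copies; EC 4+2 writes 6 stripes per 4 data stripes."""
    overhead = {"RF2": 2.0, "EC4+2": (4 + 2) / 4}   # 2.0x vs 1.5x
    return logical_tb * overhead[scheme]
```

For 100 TB of logical data, RF2 needs 200 TB raw while EC 4+2 needs 150 TB, at the cost of rebuild complexity.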

– Encrypt at partition and storage domain level
– Once enabled cannot be disabled: FIPS 140-2 encryption
– Note: JPEGs won’t compress well, MSWord data can compress to save space

– Cloud tier: offloads cold data to a cloud source
– Helps to keep free space in check and only retain the hottest data (NAS customers only)
– Cannot be disabled once enabled at Storage Domains

– Quotas can be set at: Storage Domain level, Physical/Logical Quotas

Cohesity Protection


Sources => protection job => Protection Policy => Target => External targets (SMB/NFS/S3)

Backup Planning:
– Recovery Point Objective (RPO): how often to take backup copies
– Recovery Time Objective (RTO): How long to bring the backup data back
– Snapshots consistency: hypervisor, application, or crash level consistency
– Retention: how long to keep snapshots
– Scheduling: are other backups running in parallel? Full will take much longer than Incremental
– Bandwidth: 10Gb or 1Gb links?
– CPU/RAM: don’t turn on backups during indexing
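The Bandwidth and RTO items above can be sanity-checked with simple arithmetic. Below is a rough full-transfer estimate given payload size and link speed; the 0.7 efficiency factor is an assumed fudge factor for protocol and contention overhead, not a vendor number:

```python
def transfer_hours(data_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to move a full backup across a link.
    efficiency is an assumed real-world throughput factor."""
    gigabits = data_gb * 8          # payload in gigabits
    rate = link_gbps * efficiency   # effective gigabits/second
    return gigabits / rate / 3600   # seconds -> hours
```

On these assumptions, 10 TB over a 10Gb link takes roughly 3 hours; over a 1Gb link, roughly 30, which is why the link speed question matters for scheduling.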

GUI Overview

More: Workbench (sample add-ons)
– Video Compressor: does just that
– Pattern Finder: read all files for a regex match

Protection Job:
– Auto-protection: single source, can be multiple objects per job
– App-consistent: pause transactions, take snapshots
– Crash-consistent:
– Raw Device Mapping (RDM): are they mapped?

– Object types: VMs, file/folder, bare metal, etc.
– Locations: original, alternate, cloud
– File and folder considerations:
– Instant volume considerations: Ontrack
– Storage Volume: can recover directly to Source
– SQL: overwrite, renamed, alternate instance, clones

– Cohesity cluster
– Source cannot be contacted
– Destination location not available
– CloudTier gone

– Cohesity Cluster
– Source Registration
– Restore Destination

– Email and phone support available
– Service tier level, priority, severity

Data Protection:
– Hosts: physical, virtual
– Source => Target => Cloud (offsite)
– Virtual machines: agent optional
– Need agent for physical hosts
– Need accounts with permissions: OS, SQL
– Set blackout windows so backups won’t run during those times; this is different than replication blackout window

Agent workflow:
– VMware & AHV: agent not required
– Required: physical, Linux, AIX, Solaris, Virtual host w/ RDM attached, Hyper-V, Apps: SQL, Oracle, Exchange
– Protection > Sources > Download Cohesity Agent
– Cannot push agent to sources
– Can update agent remotely once installed
– Inclusions & Exclusions
– After agent is installed, access Cohesity control panel GUI to register host

Host Protection Jobs:
– Snap-based functionality
– File-based: can back up RDMs/independent disks, can exclude at the folder level, slower than block-based, can’t do bare-metal recovery
– Block-based: VSS, can exclude at the volume level, quicker, can back up RDMs/independent disks

Data domain naming convention examples:
– idd = inline dedup
– icc = inline compression
– ien = inline encryption

VMware Configuration
– Install VMTools on guests
– VM hardware version must be 9 or above
– Recommend: 100 VMs per node
– Network: Cohesity must be able to reach host VM
– Component options: policy (gold,silver,bronze), source/objects, storage domain, Advanced options (QoS, start-end date, excludes etc.)

Creating View:
– View means File Access Share
– All protocols are selected by default. If we choose all, this view will be read-only
– Whitelist (Access): affecting Read permissions
– Whitelist (File filtering): affecting Write permissions. Specify file extension(s) allowed; all else will be blacklisted
– Blacklist: specify file extension(s) blocked; all else will be whitelisted
– File datalock: lock and protect files, set retention period, good for legal holds. Must use special security role to be able to alter data lock.
– Quota and Alerts: inherit from Storage Domain, override if necessary
– QoS:
Backup target = sequential workload
Test and Dev = random workload

– S3 bucket:
Copy access key: Admin > S3 > Access Key
– SMB:
Turn on enumeration

– Protecting view: make backup jobs

Replication & Disaster Recovery
– CloudArchive: replace tape for long term retention
– Protection Job => snapshot to Cohesity => remote targets
– Protection > Remote Clusters > Add Cluster > set VIP or Node IP Addresses (can be multiple), Username, Password > Connect
– Cluster options: Remote Access (enable easy GUI access between replication partners)
– Replication settings: Distribute Load, Outbound compression, Enabled encryption, etc.
– Replication at the Protection Policy or Run Now level
– The backup job must complete before replication proceeds
– Changing between clusters by selecting a drop-down menu next to the Cohesity logo on the top left
– Blackout window: replication will queue until window expires then proceed
– Syncing: both sides must have replication enabled and pairing on
– CloudRetrieve: retrieves from the Cloud to a different cluster
– CloudRecover: retrieves from the Cloud to the original cluster that owns the CloudArchive set