Project Servers Virtualization

Project Scope:
The scope of this project is limited to stabilizing the production environment by ensuring high availability and establishing robust disaster recovery on all server machines. Another benefit of the project is to make all servers independent of hardware issues: any troubleshooting of server failures can then be isolated to software settings. This is also a precursor to future infrastructure expansion or conversion.
 
Objectives:
1. All servers will be virtualized and hosted in Irvine and Amazon Web Services
    a. Non-Lotus Domino servers will be converted first
    b. Backup & recovery will be set for servers converted in step 1a
    c. Lotus Domino servers will be replaced by virtualized versions running the Windows Server 2012 operating system, one service component at a time (SMTP, MAIL, WEB, APPS, etc.). Please note that the currently available conversion tool only allows VMware machines to be converted into AWS; there is no utility to convert from AWS to VMware.
    d. Each virtual Lotus Domino server instance will be set to synchronize with its AWS counterpart. Web (AWS) versions will be primary; locally hosted versions will be backup.
    e. Disaster recovery plan shall be fully implemented on all virtual machines as well as AWS instances
2. All physical servers will be retired from Florida DC
3. All physical servers will be retired from Phoenix DC
4. Iron Mountain shall be set to store backup data from all machines
 
 
Hardware Required:
a. 2 ESX servers, each with 256GB of RAM – 1 server is necessary for project initialization
b. 2 SAN appliances, each with 4 TB of available disk space with built in redundancy – 1 SAN appliance is necessary for project initialization
c. 2 stackable switches, capable of 802.3ad link aggregation and VLANs
 
 
Software Required:
a. 2 ESX licenses
b. 2 Windows Server 2012 licenses
 
 
Subscription Required:
a. Iron Mountain
b. AWS Snapshotting Vendors
 
 
Considerations:
a. Production servers are currently consuming 239 GB of RAM in total
b. The non-production Dell PowerEdge R620 is capable of 1536 GB of RAM (source: http://www.dell.com/us/business/p/poweredge-r620/pd)
c. There are two Dell PowerEdge R620 servers inside the Irvine data center. One is not in production, and the other is MAIL3 (aliases: srvamx03 / srvadomino10 / srva-gjyf3w1). These two servers are capable of hosting the whole environment when loaded with at least 256 GB of RAM each.
d. ESXi 5.5 supports up to 4000 GB of RAM (source: https://communities.vmware.com/thread/458412)
e. The fall-back plan for each step of the migration is to preserve the existing physical machines for 3 months. If any conversion procedure yields an error, the physical machine will immediately be re-commissioned.

Project Proposal: Servers Migration

Prepared for Planetary Systems Inc. (Short Version)

 

  1. Scope of Work

The Systems Team shall implement virtualization technology to consolidate physical servers onto robust virtual hosts. These hosts will be located at the corporate headquarters, which shall function as a replication site for our Amazon Web Services instances. Furthermore, we shall upgrade our existing operating system and application software as part of this migration plan.

 

  2. Benefits
  • Most of the existing hardware warranties have expired. Instead of purchasing service plans for multiple machines, it would be a cost saving to decommission those machines.
  • Many of the servers are currently running Windows Server 2003, for which Microsoft will discontinue support on July 14, 2015. Software security would be compromised if we chose not to upgrade these servers, as no new patches will be available after that date.
  • At the completion of this project, the Phoenix and Florida data centers are to be shut down. Decommissioning all servers at these satellite data centers is necessary for future physical infrastructure changes.
  • Systems maintenance shall be streamlined. Instead of managing many physical machines, it is much more efficient to focus on a few appliances. These few machines shall have full warranties with readily available technical support from the manufacturers.
  • Once servers are virtualized, it shall be possible to make on-demand backups of entire instances very quickly. Server crashes and production interruptions shall be minimized.
  • Future Lotus Domino conversion progress can be made with confidence that any changes can be reversed in minutes.
  • Data security will be greatly improved. Tape drives and their associated maintenance costs will be replaced with SAN-to-SAN replication protocols as well as Amazon virtual tape libraries.

 

  3. Schedule of Deliverables

This project shall be divided into four phases, and the timeline for each phase's objectives is tentatively set as shown below. The estimated completion dates are not yet firm, so this schedule should be treated only as an overview. The Systems Team shall revise these estimates as the project progresses.

 

Work Schedule

Ref   Deliverable                              Lead Time (days)   Start Date   End Date    Resources
3.1   Project Proposal and Approvals
        Proposal Document                      10                 6/1/2015     6/15/2015   Tom, Jerry, Kim
        Purchasing Process                     10                 6/15/2015    6/29/2015   Tom, Jerry, Kim
3.2   Physical Infrastructure Servers Setup                       6/29/2015    7/20/2015
        Hardware Installation                  5                  6/29/2015    7/6/2015    Jerry, Kim
        Software Installation                  5                  7/6/2015     7/13/2015   Jerry, Kim
        Infrastructural Servers Installation   5                  7/13/2015    7/20/2015   Jerry, Kim
3.3   Tier 2 Servers Migration                                    7/20/2015    11/8/2015   Tom, Jerry, Mickey, Kim
3.4   Tier 1 Servers Migration                                    11/9/2015    6/12/2016   Jerry, Mickey, Kim
        Setup Backups                          5                  6/13/2016    6/20/2016   Kim

Please note that deliverable 3.3 shall be expanded into the creation of each of those instances with a new Windows Server 2012 OS and a new Domino Server setup in VMware. Those instances shall then be converted into Amazon Web Services instances. A server function shall be marked complete when there is a replicating pair of servers between its AWS (cloud) and VMware (local) instances.
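
For reference, besides the AWS Connector for vCenter listed in the Exhibits, a VMware-to-AWS conversion can also be driven through AWS VM Import/Export. This is only a rough sketch, assuming the AWS CLI is configured and the VM has been exported to an OVA; the bucket and file names below are placeholders:

#Upload the exported OVA, request the import, then poll until the task completes:
aws s3 cp ./mail3.ova s3://vm-import-bucket/mail3.ova
aws ec2 import-image --description "MAIL3" --disk-containers Format=ova,UserBucket="{S3Bucket=vm-import-bucket,S3Key=mail3.ova}"
aws ec2 describe-import-image-tasks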

  4. Purchase Requirements

We need to purchase hardware and software before initializing this project. Below is a comparison of the purchasing options:

 

Required Items                                     Option 1                              Price         Option 2                              Price         Option 3          Price
Hardware: Two (2) Servers                          Repurposing R620s                     $0.00         Dell.com: R630                        $8,066.30     Dell.com: R630    $8,066.30
Server Memory: 48 modules of 16 GB each            Newegg Kingston                       $6,671.52     Newegg Kingston                       $7,535.52     Dell.com: RAM     $13,053.12
Two (2) Windows Server 2012 Datacenter Licenses    Amazon Marketplace                    $4,000.00     SoftwareMedia.com                     $6,997.98     Microsoft.com     $12,312.00
VMware vSphere Essentials Plus Kit                 SoftwareMedia.com (1-year support)    $944.00       SoftwareMedia.com (3-year support)    $2,492.16     VMware            $5,439.00
Total                                                                                    $11,615.52                                          $25,091.96                      $38,870.42

 

  5. Responsibilities

Each person listed as a resource on the work schedule shall be responsible for the corresponding deliverable(s). The project shall be overseen by Management, consisting of NAMED, Vice President, and MICKEY, Assistant Vice President.

 

  6. Exhibits

 

Servers List

Server Name (alias)    Make/Model    Warranty    Service Tag    O.S.    OS End of Life

 

VMWare & AWS Integration

AWS Connector for vCenter – migrate VMs to AWS instances
How To: http://aws.amazon.com/ec2/vcenter-portal
Download vCenter plug-in: https://s3.amazonaws.com/aws-connector/AWS-Connector.ova
Cost: $0
 
AWS Storage Gateway – cloud tape backup
How to: http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStartedDownloadVM-common.html
Download: https://console.aws.amazon.com/storagegateway/home?region=us-west-1
Cost: $125 per month + storage fee + data transfer fee
Default administrator account: sguser / sgpassword
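
Once the Storage Gateway subscription is active, it can be sanity-checked from the AWS CLI (a sketch, assuming the CLI is installed and configured):
aws storagegateway list-gateways --region us-west-1
aws storagegateway list-tapes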

Dell OpenManage and ESXi 6.0 Integration

Install OpenManage on ESX 5.1 to 6.0:
———————————————–
Check whether OpenManage is already installed:
esxcli software vib list | grep -i OpenManage
 
If OpenManage is not installed, obtain the link to the correct VIB package: http://de.community.dell.com/techcenter/extras/w/wiki/1490.dell-openmanage-server-administrator-for-esxi-6-x-and-older
 
Install OpenManage onto ESXi:
Connect via SSH into ESXi and run these commands:
cd /tmp
wget http://ftp.euro.dell.com/FOLDER02867568M/1/OM-SrvAdmin-Dell-Web-8.1.0-1518.VIB-ESX60i_A00.zip
# esxcli expects an absolute path to the depot zip
esxcli software vib install -d /tmp/OM-SrvAdmin-Dell-Web-8.1.0-1518.VIB-ESX60i_A00.zip
rm /tmp/OM-SrvAdmin-Dell-Web-8.1.0-1518.VIB-ESX60i_A00.zip
———————————————–
 
Turn on CIM service and SSH on the ESX server:
Vsphere Client >> Configuration >> Security Profile >> click Properties on the Services section >> select CIM Server >> click Options >> select “Start and stop with host” >> OK >> Repeat for the SSH service >> Ensure CIM Service and SSH is allowed through the ESX firewall
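
Roughly equivalent shell commands, for reference (a sketch; the WBEM enable step varies by ESXi build, so this starts the sfcbd CIM service directly):
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
esxcli network firewall ruleset set --ruleset-id CIMHttpsServer --enabled true
/etc/init.d/sfcbd-watchdog start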
 
Install DSET Collector on a remote computer:
Download and install from this link: http://www.dell.com/support/contents/us/en/19/article/Product-Support/Self-support-Knowledgebase/enterprise-resource-center/Enterprise-Tools/dell-system-e-support-tool
 
Run DSET Collector from the client:
On a client computer running Windows 7 >> Start >> All Programs >> DSET >> right-click DSET CLI, run as Administrator >> type in the command: dellsysteminfo.exe -s 10.10.10.1 -u root -n root/dcim/sysman >> press Enter >> wait for the report to be generated and dumped onto the current user's desktop
 
Install Dell OpenManage Server Administrator (OMSA) Managed Node
Download and install the driver from this link: http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=20V28
Create a shortcut that uses Internet Explorer to open the link, such as: "C:\Program Files\Internet Explorer\iexplore.exe" https://localhost:1311
Use the newly created shortcut to connect to the remote Dell server and administer its hardware and firmware

Some Useful VMWare ESX CLI commands

#Clear current session terminal messages:
clear

#Review shell command history:
vi /var/log/shell.log

#Round robin for EQLOGIC HBA ports:
esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL
for i in $(esxcli storage nmp device list | grep EQLOGIC | awk '{print $7}' | sed 's/[()]//g'); do
  esxcli storage nmp device set -d $i --psp=VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops
done

#Disable LRO on the default TCP/IP stack (a common EqualLogic iSCSI recommendation):
esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

#Check storage HBA connections
esxcli storage nmp device list

#Restart Management Agents to resolve hung-task issues
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
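
#If restarting hostd and vpxa does not help, all management agents can be restarted at once (briefly disconnects vSphere clients):
/sbin/services.sh restart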

#Change thin volume to thick:
Browse to the VMDK file >> right-click, Inflate >> SSH into the ESX host >> find the VMID: vim-cmd vmsvc/getallvms >> reload the VM: vim-cmd vmsvc/reload <vmid>
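
The inflate step can also be done entirely from the shell; a sketch (the datastore and VM paths below are placeholders):
#Power off the VM first, then inflate the thin disk to eagerzeroedthick:
vmkfstools --inflatedisk /vmfs/volumes/datastore1/Web02/Web02.vmdk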

ESX & Enterasys LAG Configurations

Requirements for ESX and LACP compatibility of Enterasys core switches:
1. static LAG
2. vlan egress port tagged
3. Single port lag enabled
4. IP hash (trunking) mode in ESX vSwitch
5. vNICs for virtual machines must support VLAN traffic (the E1000 adapter is not supported; see the note at the end of this section)
6. Promiscuous mode on all vSwitches must be set to ON

Check current configuration:
# show config
...
set vlan egress 20 lag.0.1-3;ge.1.1-5,7-13,15-24,26-27,29;ge.2.1-30 untagged
...

Check lacp statuses:
# show lacp
...
Aggregator: lag.0.x
...
Attached ports
...

Check specific port lag:
# show port lacp port ge.2.x status detail

Configure static lag:
https://community.extremenetworks.com/extreme/topics/configuring_a_static_dynamic_lag_on_a_securestack
If Enterasys has a newer firmware...
# set lacp static lag.0.1 key 1 ge.2.5-8
If Enterasys has an older firmware...
Set dynamic lag:
# set port lacp port ge.2.5-8 aadminkey 418
# show lacp lag.0.1
OR set static lag:
# set lacp static lag.0.1 ge.2.5-8 418
# set lacp static enable

Example for newer firmware:
set single port lag enabled
set port alias lag.0.1 ESX01
set port alias lag.0.2 ESX02
set port alias lag.0.3 ESX03
set port alias lag.0.4 ESX01-vMotion
set port alias lag.0.5 ESX02-vMotion
set port alias lag.0.6 ESX03-vMotion
set lacp aadminkey lag.0.1 1
set lacp aadminkey lag.0.2 2
set lacp aadminkey lag.0.3 3
set lacp aadminkey lag.0.4 4
set lacp aadminkey lag.0.5 5
set lacp aadminkey lag.0.6 6
set lacp static lag.0.1 key 1 ge.1.1-2;ge.2.1-2
set lacp static lag.0.2 key 2 ge.1.3-4;ge.2.3-4
set lacp static lag.0.3 key 3 ge.1.5-6;ge.2.5-6
set lacp static lag.0.4 key 4 ge.1.7;ge.2.7
set lacp static lag.0.5 key 5 ge.1.8;ge.2.8
set lacp static lag.0.6 key 6 ge.1.9;ge.2.9
set port jumbo enable ge.1.1-9;ge.2.1-9

set vlan egress 20 lag.0.1;ge.1.1-2;ge.2.1-2 tagged <== tag ESX01 servers subnet lag
set vlan egress 20 lag.0.2;ge.1.3-4;ge.2.3-4 tagged <== tag ESX02 servers subnet lag
set vlan egress 20 lag.0.3;ge.1.5-6;ge.2.5-6 tagged <== tag ESX03 servers subnet lag
set vlan egress 20 lag.0.1-6;ge.1.1-9;ge.2.1-9 tagged <== tag all 6 lags (if everything is set)
set vlan egress 20 lag.0.4-6;ge.1.7-9;ge.2.7-9 tagged <== tag vMotion lags

Note: VMware E1000 and "Flexible" adapters cannot interface with trunked ports; thus, all virtual machine instances must use vNIC models E1000E or VMXNET3 before tagged VLANs are set up on the core switch. The port group properties in the virtual switch must be set to tag the subnet with the appropriate VLAN ID (e.g. 20), with promiscuous mode enabled and the IP-hash load-balancing algorithm selected, as sketched below.
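
For reference, the vSwitch side can be set from the ESXi shell; a sketch assuming a standard vSwitch named vSwitch0 and a port group named "VM Network" (verify the actual names with: esxcli network vswitch standard list):

esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=20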

Example of a HipChat Server Installation

FQDN: hipchat.kimconnect.com
Internal IP: 10.10.100.205
Public IP: 12.12.12.12
 
Firewall configurations:
inbound TCP 443
inbound TCP 80
inbound TCP 22
inbound TCP 5222-5223
outbound TCP 25
outbound TCP/UDP 53
outbound TCP/UDP 123
outbound TCP 443 to destinations: marketplace.atlassian.com, barb.hipch.at, hipchat-server-stable.s3.amazonaws.com, hipchat-dependencies-stable.s3-website-us-east-1.amazonaws.com, hipchat-dependencies-stable.s3.amazonaws.com
outbound TCP 80
 
Default administrator: admin / hipchat
hipchat network -t   //check current IP
hipchat network -m static -i 10.10.100.205 -s 255.255.255.0 -g 10.10.100.254 -r 8.8.8.8   //set static IP
Locate your domain certificate ({domain_name}.pem) and private key (kimconnect.key) files
Open a browser and navigate to https://hipchat.kimconnect.com
Follow the wizard to complete the initialization

Install ESX 5.5 on 5th Generation NUC

Download the following:
- ESX 5.5 ISO
- ESXi-Customizer v2.7.2 (http://www.v-front.de/p/esxi-customizer.html)
- net-e1000e-3.1.0.2-glr-offline_bundle.zip (https://vibsdepot.v-front.de/wiki/index.php/Net-e1000e)
- UNetbootin

Follow these steps:
- Edit ESXi-Customizer.cmd at lines 593-595 (source: https://communities.vmware.com/thread/483693?start=15&tstart=0), changing this:
--------------------

findstr /I /L "<payload" %1 | "%SED%" -e "s#.*<payload name=\"#set %2PayloadName=#I;s#\".*##I" >>%3
echo.>>%3
findstr /I /L "<payload" %1 | "%SED%" -e "s#.*<payload .* type=\"#set %2PayloadType=#I;s#\".*##I" >>%3

to this:


findstr /I /R "<payload.*name" %1 | "%SED%" -e "s#.*<payload name=\"#set %2PayloadName=#I;s#\".*##I" >>%3
echo.>>%3
findstr /I /R "<payload.*name" %1 | "%SED%" -e "s#.*<payload .* type=\"#set %2PayloadType=#I;s#\".*##I" >>%3
--------------------
- Run ESXi-Customizer.cmd to use net-e1000e-3.1.0.2-glr-offline_bundle.zip with ESX 5.5 ISO to generate a customized ISO with the Intel NIC driver
- Install ESXi onto NUC
- Access ESXi to enable SSH
- SSH into ESXi to run these commands (source: http://www.virten.net/2015/02/how-to-install-esxi-on-5th-gen-intel-nuc-nic-and-ahci-workaround/)
--------------------
cd /tmp
mkdir ahci
cd ahci
vmtar -x /bootbank/sata_ahc.v00 -o sata_ahc.tar
tar xvf sata_ahc.tar
rm sata_ahc.tar
echo "regtype=linux,bus=pci,id=8086:9c83 0000:0000,driver=ahci,class=storage" >> etc/vmware/driver.map.d/ahci.map
tar cvf sata_ahc.tar etc usr
vmtar -c sata_ahc.tar -o sata_ahc.vgz
mv sata_ahc.vgz /bootbank/sata_ahc.v00
--------------------
- Reboot ESXi
- Use vSphere Client to add storage
- Done!

How to Clone Virtual Machine in ESXi without using vSphere Web Client (vCenter)

Update: a pure command-line version of this procedure is available in a newer blog post.

SSH into ESXi host

#Find volume name:
ls -la /vmfs/volumes

[admin@esx2:~] ls -la /vmfs/volumes
total 3076
drwxr-xr-x    1 root     root           512 May 27 01:35 .
drwxr-xr-x    1 root     root           512 May  4 01:23 ..
drwxr-xr-x    1 root     root             8 Jan  1  1970 508bb77a-fb9ab146-3b90-4a26b3a5efb4
drwxr-xr-x    1 root     root             8 Jan  1  1970 5ed456f4-f38365db-11e9-94c691ac4caa
drwxr-xr-t    1 root     root         73728 May 27 00:16 5ed456fa-5daa2d25-e3bd-94c691ac4caa
drwxr-xr-x    1 root     root             8 Jan  1  1970 5ed456fa-88463044-10c6-94c691ac4caa
drwxr-xr-t    1 root     root         73728 Jul 30  2020 5f222533-604a4190-0413-94c691ac4caa
lrwxr-xr-x    1 root     root            35 May 27 01:35 Micron-SSD-476GB -> 5f222533-604a4190-0413-94c691ac4caa
lrwxr-xr-x    1 root     root            35 May 27 01:35 Pioneer-SSD-216GB -> 5ed456fa-5daa2d25-e3bd-94c691ac4caa
drwxr-xr-x    1 root     root             8 Jan  1  1970 ac080970-d1efc9cf-c76a-77500ed2402a

#List all instances in a volume:
volumeName=5ed456fa-5daa2d25-e3bd-94c691ac4caa
ls -la "/vmfs/volumes/$volumeName"

[admin@esx2:~] ls -la /vmfs/volumes/5ed456fa-5daa2d25-e3bd-94c691ac4caa
total 1476864
drwxr-xr-t    1 root     root         73728 May 27 00:16 .
drwxr-xr-x    1 root     root           512 May 27 01:41 ..
-r--------    1 root     root       3866624 Jun  1  2020 .fbb.sf
-r--------    1 root     root     134807552 Jun  1  2020 .fdc.sf
-r--------    1 root     root     268632064 Jun  1  2020 .jbc.sf
-r--------    1 root     root      16908288 Jun  1  2020 .pb2.sf
-r--------    1 root     root         65536 Jun  1  2020 .pbc.sf
-r--------    1 root     root     1074331648 Jun  1  2020 .sbc.sf
drwx------    1 root     root         69632 Jun  1  2020 .sdd.sf
-r--------    1 root     root       7340032 Jun  1  2020 .vh.sf
drwxr-xr-x    1 root     root         73728 May 27 00:16 ISOs
drwxr-xr-x    1 root     root         73728 May 27 01:31 Web02

#Create new folder to store a new VM instance
volumeName=5ed456fa-5daa2d25-e3bd-94c691ac4caa
newMachineName=Web03
destinationDirectory="/vmfs/volumes/$volumeName/$newMachineName"
mkdir $destinationDirectory

#Clone an existing machine with one disk file:
volumeName=5ed456fa-5daa2d25-e3bd-94c691ac4caa
machineName=Web02
newMachineName=Web03
vmkfstools -i "/vmfs/volumes/$volumeName/$machineName/$machineName.vmdk" "/vmfs/volumes/$volumeName/$newMachineName/$newMachineName.vmdk" -d thin

#Clone an existing machine with a snapshot
vmkfstools -i /vmfs/volumes/$volumeName/$machineName/$machineName-000001.vmdk /vmfs/volumes/$volumeName/$newMachineName/$newMachineName.vmdk -d thin

[admin@esx2:~] vmkfstools -i "/vmfs/volumes/$volumeName/$machineName/$machineName.vmdk" "/vmfs/volumes/$volumeName/$newMachineName/$newMachineName.vmdk" -d thin
Destination disk format: VMFS thin-provisioned
Cloning disk '/vmfs/volumes/5ed456fa-5daa2d25-e3bd-94c691ac4caa/Web02/Web02.vmdk'...
Clone: 100% done.

# The task of creating a virtual machine from VMDK requires plug-ins that are not included in the default instance of ESXi. Hence, it is necessary to perform that task via the GUI.
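
# Alternatively (per the command-line article referenced above), a minimal VMX file can be hand-written and registered from the shell. This is only a sketch: the hardware values below are assumptions and must be adjusted to match the source VM.
cat > "/vmfs/volumes/$volumeName/$newMachineName/$newMachineName.vmx" << EOF
config.version = "8"
virtualHW.version = "11"
guestOS = "windows8srv-64"
displayName = "$newMachineName"
memSize = "4096"
numvcpus = "2"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "$newMachineName.vmdk"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
EOF
vim-cmd solo/registervm "/vmfs/volumes/$volumeName/$newMachineName/$newMachineName.vmx"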

To perform the VM registration task via GUI:

Access vSphere Client > right-click the host > select New Virtual Machine > select Custom > input new_machine_name > Next > Next > Next > select the correct OS > Next > Next > delete the default hard drive > click Add hard disk > select "existing hard disk"

Browse to the newly cloned VMDK file > OK > Next > Next > Finish > power on the new VM

(Screenshot: new virtual machine finishing screen)

If Windows, run C:\windows\system32\sysprep\sysprep.exe (typically with the /generalize and /oobe switches) to reset the machine identity
If Linux, run nmtui to change the IP and hostname

Domino Server Restore Procedure

1. In AWS, verify that a new instance of a Domino Server has been launched
Log onto AWS >> EC2 >> click Instances
 
2. Obtain the new instance’s IP address
Right-click instance name >> Networking >> Manage Private IP Addresses >> note its Private IP
 
3. Reset Computer Machine Password
 
 
 
psexec \\aws-dominoserver01 -e -h -u aws-dominoserver01\mailadmin -p password net stop "IBM Domino Server (DLotusDominodata)"
>> set the Domino service to manual start (see the sketch below)
>> change the machine to a workgroup
>> reboot
>> join the domain
>> reboot
>> make sure DNS has the new server's IP address
>> check that replication continues
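
A sketch of those follow-up steps as commands (hypothetical: assumes netdom is available on the target, the domain name is kimconnect.com, and the same credentials as above; reboot where noted):
psexec \\aws-dominoserver01 -u aws-dominoserver01\mailadmin -p password sc config "IBM Domino Server (DLotusDominodata)" start= demand
psexec \\aws-dominoserver01 -u aws-dominoserver01\mailadmin -p password netdom remove aws-dominoserver01 /force /reboot
psexec \\aws-dominoserver01 -u aws-dominoserver01\mailadmin -p password netdom join aws-dominoserver01 /domain:kimconnect.com /userd:KIMCONNECT\mailadmin /passwordd:password /reboot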

Sample: AWS & Satellite Subnets

 
          Web Tier          App Tier          Data Tier
Zone A    172.31.0.0/20     172.31.64.0/24    172.31.128.0/24
Zone B    172.31.16.0/20    172.31.80.0/24    172.31.144.0/24
 
 
AWS Subnets:
Public Subnet 0b       172.31.0.0/20      0/20.0.31.172.in-addr.arpa
Private Subnet 16c     172.31.16.0/20     16/20.16.31.172.in-addr.arpa
Private Subnet 64b     172.31.64.0/24     64/24.64.31.172.in-addr.arpa
Private Subnet 80c     172.31.80.0/24     80/24.80.31.172.in-addr.arpa
Private Subnet 128b    172.31.128.0/24    128/24.128.31.172.in-addr.arpa
Private Subnet 144c    172.31.144.0/24    144/24.144.31.172.in-addr.arpa
 
Subnet Expressions:
192.168.0.0/16
172.31.0.0/19
172.31.64.0/24
172.31.80.0/24
172.31.128.0/24
172.31.144.0/24
 
Subnet Strings:
192.168.0.0/16, 172.31.0.0/19, 172.31.64.0/24, 172.31.80.0/24, 172.31.128.0/24, 172.31.144.0/24
 
Summarized Routes (OpenVPN-style push directives):
push "route 192.168.0.0 255.255.0.0"
push "route 172.31.0.0 255.255.224.0"
push "route 172.31.64.0 255.255.255.0"
push "route 172.31.80.0 255.255.255.0"
push "route 172.31.128.0 255.255.255.0"
push "route 172.31.144.0 255.255.255.0"

Network Zones

Corporate Headquarters:
DMZ:

  1. Extranet: Vendors
  2. Web: Front-end Sites (a) Web (b) Application (c) Data
  3. Public: Public, satellite VPN connections 

Internal:

  1. Warehouse: (a) scanners (b) guests (c)
  2. Offices (departmental VLAN segregation): (a) Executives (b) Accounting (c) Sales-Marketing (d) Customer-Service (e) IT-Infrastructure (f) DEV (g) InfoSec (h) Returns (j) R-and-D (k) Production
  3. Servers: (a) Data (b) Application (c) Front-End
  4. Printers

Cloud (Amazon Web Services & Microsoft Azure):
1. Web Tier: Availability Zone 1 & 2
2. App Tier: Availability Zone 1 & 2
3. Data Tier: Availability Zone 1 & 2