This repository is an Ansible-based network automation solution for the topology in the diagram:

- HQ Office – two-tier (Core/Distribution + Access)
- Data Center – three-tier (Core + Distribution + Access)
- Site-to-site IPsec VPN between the HQ and DC firewalls
The design is:

- Idempotent – uses Ansible network modules rather than raw CLI wherever possible.
- Data-driven – all device-specific data lives in `host_vars/` and `group_vars/`.
- Production-style – base / L2 / L3 / security configuration is separated into roles.

⚠️ You must lab-test and adapt IPs, passwords, and policy to your real environment before touching production.
Credit for this topology goes to Randy Pratama Putra (https://www.linkedin.com/in/randy-pratama-putra/).

- Two-Tier HQ & Three-Tier Data Center Automation using Ansible – Session 2 – Ansible Inventory Design
- Two-Tier HQ & Three-Tier Data Center Automation using Ansible – Session 3 – Ansible Playbooks & Roles
- Python 3.10+
- Ansible 2.16+
- Cisco IOS / IOS-XE on switches and routers
- Cisco ASA/FTD for firewalls (VPN role is a skeleton to be adapted)
Install collections:

```bash
ansible-galaxy collection install -r requirements.yml
```

Groups (from `inventory/hosts.yml`):

- `hq_core` – Core-01, Core-02
- `hq_access` – Switch-01…Switch-04
- `dc_core` – Core-SW01, Core-SW02
- `dc_dist` – Dist-SW01, Dist-SW02
- `dc_access` – Access-SW01…Access-SW04
- `firewalls` – Firewall-HQ, Firewall-DC
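As a minimal sketch of how those groups could be laid out (the management IPs and connection settings below are illustrative placeholders, not this repo's actual values):

```yaml
# inventory/hosts.yml -- illustrative sketch; adapt hosts and IPs
all:
  vars:
    # network_cli over SSH with the IOS terminal plugin
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: cisco.ios.ios
  children:
    hq_core:
      hosts:
        Core-01: { ansible_host: 10.0.0.11 }
        Core-02: { ansible_host: 10.0.0.12 }
    hq_access:
      hosts:
        Switch-01: { ansible_host: 10.0.0.21 }
        # ...Switch-02 through Switch-04
    firewalls:
      hosts:
        Firewall-HQ: { ansible_host: 10.0.0.1 }
        Firewall-DC: { ansible_host: 10.0.1.1 }
```

Keeping connection settings at the `all` level and only host-specific data under each host keeps the inventory small and readable.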
Each host has its own `host_vars/<hostname>.yml` file describing:

- VLANs & SVIs
- L3 links (`/29` interconnects as per the diagram)
- Access ports (for PCs/servers)
- Trunks (for uplinks)
- IGP (OSPF) and default route
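A minimal host_vars sketch of that shape (the VLAN IDs, subnets, and interface names here are illustrative examples, not the repo's real data):

```yaml
# host_vars/Core-01.yml -- illustrative values only
vlans:
  - { id: 10, name: USERS }
  - { id: 20, name: SERVERS }
svis:
  - { vlan: 10, ip: 192.168.10.2/24 }
l3_links:
  - { interface: GigabitEthernet0/1, ip: 10.10.10.1/29, description: "to Core-02" }
trunks:
  - { interface: GigabitEthernet0/2, allowed_vlans: "10,20" }
ospf:
  process_id: 1
  area: 0
default_route: 10.10.10.6
```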
- `base` – hostname, domain, NTP, syslog, SNMP, timezone
- `layer2` – VLANs, access interfaces, trunk interfaces
- `layer3` – SVIs, routed interfaces, OSPF, static default route
- `vpn_firewall` – hostname + inside/outside interface config, with a clear TODO for IPsec policy
Roles are intentionally small and composable so they can be easily extended (QoS, port‑channels, security, etc.).
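To illustrate the role style, a `layer2` task could be sketched as follows (assuming the `cisco.ios` collection and a `vlans` variable shaped like the host_vars example; the repo's actual tasks may differ):

```yaml
# roles/layer2/tasks/main.yml -- illustrative sketch
- name: Ensure VLANs exist
  cisco.ios.ios_vlans:
    config:
      - vlan_id: "{{ item.id }}"
        name: "{{ item.name }}"
    state: merged
  loop: "{{ vlans }}"
```

Because `ios_vlans` is a resource module with `state: merged`, re-running the task against a compliant device reports no change, which is what makes the roles idempotent.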
- `playbooks/hq.yml` – applies `base`, `layer2`, `layer3` to HQ core; `base`, `layer2` to HQ access.
- `playbooks/dc.yml` – applies roles to DC core, distribution, and access.
- `playbooks/firewalls.yml` – applies firewall base + IP interface config.
- `playbooks/site.yml` – orchestrator that imports all of the above.
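The orchestrator pattern is straightforward (a sketch; the actual `site.yml` in this repo may differ in detail):

```yaml
# playbooks/site.yml -- illustrative sketch
- import_playbook: hq.yml
- import_playbook: dc.yml
- import_playbook: firewalls.yml
```

Using `import_playbook` keeps each site runnable on its own while `site.yml` drives a full deployment.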
Example usage:

```bash
# Dry run with diffs
ansible-playbook playbooks/site.yml -C --diff

# Actual deployment
ansible-playbook playbooks/site.yml
```

This repo does not hardcode sensitive credentials:
- Username is set in `inventory/hosts.yml`.
- Password is taken from the environment variable `NET_PASSWORD`.
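One common way to wire that up (a sketch using Ansible's standard `env` lookup; the `admin` username is an assumed placeholder):

```yaml
# group_vars/all.yml -- illustrative credential wiring
ansible_user: admin          # assumed placeholder; match inventory/hosts.yml
ansible_password: "{{ lookup('env', 'NET_PASSWORD') }}"
```

The lookup is resolved at runtime, so the password never lands in the repo or in `ansible-playbook` command history.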
Example:

```bash
export NET_PASSWORD='YourStrongPassword!'
ansible-playbook playbooks/site.yml
```

SNMP communities, NTP, syslog, and domain name are in `group_vars/all.yml`.
Adapt these to your org standards (SNMPv3, TACACS/RADIUS, etc.) before production.
Under `tests/` you’ll find placeholders for:

- pyATS – for VLAN, OSPF, and routing validation.
- Batfish – for pre-change routing policy analysis.
These are not fully implemented (they depend on your tooling), but the structure is ready to plug into CI/CD.
1. Clone this repo to a lab environment.
2. Replace mgmt IPs in `inventory/hosts.yml` with your real device IPs.
3. Adjust VLAN IDs, subnets, and OSPF areas if your implementation differs.
4. Run:

   ```bash
   ansible-lint
   ansible-playbook playbooks/site.yml -C --diff
   ```

5. Validate end-to-end reachability and routing.
6. Only then run without `-C`, in a change window.
This is a production-style repository: the structure, idempotency, and separation of concerns are in place; you only need to adapt the data and your local standards (AAA, logging, naming conventions, etc.).



