Networking
Things I am studying for my CCNP and other real life stuff
Tuesday, May 7, 2019
Multicast Source Discovery Protocol (MSDP)
During one of my incredibly fun projects with BGP, I had to quickly and efficiently research, learn, and implement a new (to me) technology. While setting up my side of a BGP peering I also had to set up a Multicast Source Discovery Protocol (MSDP) peer. Of course, the service provider was like, "Oh, and don't forget the MSDP config." Because the service provider uses Juniper equipment, I started my search on the Juniper help pages.
"The Multicast Source Discovery Protocol (MSDP) is used to connect multicast routing domains. It typically runs on the same router as the Protocol Independent Multicast (PIM) sparse-mode rendezvous point (RP). Each MSDP router establishes adjacencies with internal and external MSDP peers, similar to the way BGP establishes peers. These peer routers inform each other about active sources within the domain. When they detect active sources, the routers can send PIM sparse-mode explicit join messages to the active source."
The Juniper TechLibrary did a really great job of spelling it out for me. After that, the configuration was pretty smooth, with the exception of a mistyped password. Luckily a quick look in the log showed an MSDP authentication error :)
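For anyone curious, my side ended up looking roughly like the sketch below (Cisco IOS syntax; the addresses, AS number, and password are made up for illustration, and the details will depend on your RP design):
ip multicast-routing
ip pim rp-address 192.0.2.1
!
! MSDP peering to the service provider, sourced from the same loopback used for the RP
ip msdp peer 198.51.100.1 connect-source Loopback0 remote-as 64500
ip msdp description 198.51.100.1 SP MSDP peer
ip msdp password peer 198.51.100.1 MySharedSecret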
Monday, May 6, 2019
What is a TACLANE?
If you read my resume you will see that I work with "Tactical Local Area Network Encryption (TACLANE) devices", or "Type 1 Encryptors". TACLANEs are High Assurance Internet Protocol Encryptor (HAIPE) Type 1 encryption devices that comply with the National Security Agency's (NSA) HAIPE IS (formerly the HAIPIS, the High Assurance Internet Protocol Interoperability Specification). I know that is a mouthful of alphabet soup. What it means is that these devices are typically used as secure gateways that allow two or more enclaves to exchange data over an untrusted or lower-classification network. The cryptography used is Suite A and Suite B, also specified by the NSA as part of the Cryptographic Modernization Program.
HAIPE IS is based on IPsec with additional restrictions and enhancements. One of these enhancements includes the ability to encrypt multicast data using a "preplaced key". This requires loading the same key on all HAIPE devices that will participate in the multicast session in advance of data transmission.
Tuesday, May 8, 2018
Cisco Softphone registering to a router in GNS3 running CME
I recently built a nice topology in GNS3. So I figured, why not see if I could get one of the routers to run CME, and then get a Cisco softphone to register to the CME?
I connected the CME router to the laptop's loopback and IPed the loopback and router interfaces. Then I installed the Cisco IP Communicator (soft IP phone). During the softphone configuration, I put the router's IP as the TFTP server IP. The first try was a No-Go ☹…. The CME I am running is a very basic setup, with no phone loads, so I had to connect to a CUCM to get the softphone to load. I re-configured the softphone's TFTP setting to point to the CUCM and it downloaded its needed files and registered to the CUCM. I re-re-configured the TFTP setting to point back at the router's IP and …JOY 😊
After the phone registered I had to add the MAC address to the ephone and configure a dn:
!
ephone-dn 1
 number 111
!
ephone 3
 mac-address 0026.b94f.adfb
 button 1:1
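For reference, the "very basic setup" on the CME router amounts to little more than a minimal telephony-service block like the sketch below (the IP address here is made up; it should be the router interface the softphone's TFTP setting points at):
telephony-service
 max-ephones 5
 max-dn 5
 ip source-address 192.168.1.1 port 2000
 create cnf-files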
Monday, April 30, 2018
Great CCNP LAB
This lab was great fun to build. I started with multi-area OSPF and then just kept on adding:
Then adding RIP and redistributing the RIP routes at router ABR-ASBR.
Then adding router A10 with a virtual link.
Then adding routers R1-IPv6 and R2-IPv6 using OSPFv3.
Then tunneling IPv6 over IPv4 from A0 to R1-IPv6 across Tunnel 1 using OSPFv3 (a sketch of that tunnel is below).
Then tunneling IPv4 over IPv6 from R2-IPv6 to R1-IPv6 across Tunnel 2 using RIP.
Tunnel 1 just uses link-local IPv6 addresses for the tunnel.
Tunnel 2's IPv4 addresses are not advertised.
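As a sketch of Tunnel 1 (IPv6 over IPv4, with OSPFv3 riding on link-local addresses only, since "ipv6 enable" gives the tunnel just a link-local address), one end looked something like this; the interface names, tunnel endpoints, and OSPFv3 process/router-id are made up:
ipv6 unicast-routing
!
ipv6 router ospf 1
 router-id 1.1.1.1
!
interface Tunnel1
 description IPv6-over-IPv4 to R1-IPv6
 no ip address
 ipv6 enable
 ipv6 ospf 1 area 0
 tunnel source GigabitEthernet0/0
 tunnel destination 10.0.12.2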
Great CCNP-Lab
Author: KLiv GNS3 file: GNS3 2.1.6 Lab
Sunday, July 3, 2016
Re-Working a remote MetroE Site
We added about 10 Computers and put the whole site on Cisco VoIP phones.
Before: You can see the Key system on the wall
After: You can see the router in the new rack.
The router has an FXO card for two B1 Lines used for 911, and Survivable Remote Site Telephony (SRST).
The Analog Gateway with 2 FXS ports for FAXing was installed after the picture was taken.
Wednesday, August 5, 2015
Just some Fun at Home w/ CUCM
CUCM on my Samsung Tab S
Cisco Phone Registered to My CUCM
Device Page of My CUCM on Samsung Tab S
Sunday, June 22, 2014
Left a really Good Job
Changed jobs this week.
So technically I'm unemployed this weekend.
The old job was one of the most challenging and rewarding jobs I've ever had, and I've had a lot!
I met and worked with some great networks and people.
This is what they said when I told them I was leaving.
This is the crew!
(They asked to join the witness protection program)
Tuesday, May 20, 2014
Cisco TAC to the Rescue
We Upgraded 12 Stacks of Cisco 3850 Switches today.
Which is no Big deal except...Cisco had to come and save the day.
We copied over and installed the new 3.3.3 IOS, which was supposed to fix a bug in the 3.3.1 version we were running. All that was needed was a reboot, which I initiated a Change Control Request for, and which had been approved for 06:00 Monday, as it would require a small disruption. At 06:00 Monday morning I was ready to reboot the first stack. The site staff had asked to reboot one stack first, just to ensure the upgrade would be compatible and that the switches in the stack would come back up. Always wanting to be the customer-service-driven professional that I am, I agreed. So after the first stack rebooted and came back up, I checked the EtherChannel and the logs, and once I notified the site staff, I rebooted the 11 other stacks of switches. All told, at about 06:30 all 12 stacks were back up, and by 06:45 I had logged into and checked all 12, with no issues noted.
Or so I thought, because just before lunch we started getting email notifications that some APs at this same site were disassociating from the controller and then rejoining just a minute later. After scouring the logs we noted that it was only a handful of 1200 series APs and that they were losing PoE and rebooting; we also noticed an "Imax" error in the log for the interface just before each AP lost power and rebooted. By lunchtime I had not found anything pertinent on the web, so I decided to call in the big guns, Cisco TAC. I used to think TAC was just for warranty; boy was I wrong! Cisco TAC will, of course, help you RMA a bad device, but I have used them more for figuring out how to configure a device or, like in this case, figuring out a bug in the software. That is why Cisco calls it a SmartNet agreement, not just an extended warranty.
After about an hour of getting the Cisco engineer up to speed, and a WebEx where he could see my screen and even drive if needed, he was able to pinpoint the bug AND get me a workaround. The issue was the older APs with the new IOS on the newer switches, and the workaround is to statically configure the max PoE power to 15.4 W:
Stack(config)#int g4/0/46
Stack(config-if)#Des ***Old AP
Stack(config-if)#power inline static max 15400
Stack(config-if)#shut
Stack(config-if)#no shut
These are older APs nearing EoL, so it is not an issue at our other sites, as we have replaced most of these APs.
THANK YOU Cisco!!
Thursday, May 1, 2014
A Picture is worth a thousand words!
This is footage from an IDF security camera.
After we didn't plug the camera back into the correct port, SiteOps called me the next morning and asked if I knew these two yahoos.....
Tuesday, April 15, 2014
Troubleshooting 101
Spent 2½ days trying to figure out why a customer's VPN solution was not performing the way the vendor had promised. Never mind the fact that the networking team had no say in the installation or operation of the solution. Spent one whole day just tracking down the laundry list of issues the customer was having and any troubleshooting steps the vendor and customer had done. The second morning we traced the path, checking routing and firewall ACLs. No issues, so I asked the customer if I could have a client to generate traffic on demand (and also check its settings). The customer was glad for any help and brought me a client. Checked firewall and IP settings; no issues. The third morning the vendor was a little upset that I was looking at the client; they were sure it was the network. After some back and forth, their tech had to go to lunch and created an account on the server so I could do some tracert, ping, etc. Ten minutes later I had it figured out.
So, the first thing I did was a tracert to a connected client, and what do I see……………….
I checked the server's IP settings and the subnet mask was incorrect. As soon as it was corrected, ALL the issues the customer had cleared up, even some they had put on the back burner.
The clients have a /21 bit mask, and the server was using a /24. The vendor's tech swore up and down he had not changed it, and that it had been working up to that point. He even wrote an email saying "How strange it was that it stopped working now." I wrote back that it was strange it had worked at all.
General troubleshooting steps:
1. Define the problem.
2. Gather detailed information.
3. Consider probable causes for the failure.
4. Devise a plan to solve the problem.
5. Implement the plan.
6. Observe the results of the implementation.
7. Repeat the process if the plan does not resolve the problem.
8. Document the changes made to solve the problem.
Thursday, April 10, 2014
MAC Spoofing to the rescue
Great story from another one of my students.
In the Cisco class I am currently taking at CTC, with Mr. Cisco as my instructor, I have learned about the many protocols, applications, and principles of how data travels over different networks. I have also learned the ways people try to gain access to that data and the ways a network can be built to defend against those attacks. I never imagined that I would use the very thing that I have been trained to defend against for my own benefit.
This past weekend my son and I had it all planned to have some good ole father-and-son time by catching a movie, having dinner, and then staying the night in a hotel playing Xbox until we passed out. The movie and dinner were great, and now it was video game time. I hooked up the Xbox to the TV in the room, turned it on, and got ready for the fun to begin. The last step before the festivities kicked off was to gain access to the internet. Houston, we have a problem. Hotels, much like the local McDonald's and Starbucks, have a captive portal that requires you to agree to their terms and conditions and put in some basic information before you can access their internet connection. This requires a web browser, which the Xbox does not have. I was stumped, but the heartbroken look on my son's face did not allow me to give up. I referred to my training and understanding of how LANs work and how data travels across the network. This is where MAC spoofing came to save the day.
I realized that I had already set up my phone to connect to the hotel's internet service, and that if I could use that connection on my Xbox all would be good. I dug through the network settings on the Microsoft device and came across the alternate MAC settings. Hallelujah. I took the MAC from my already-accepted phone and applied it to the Xbox. This time when I tried to connect there were no issues, and the online gaming session was able to commence. I was very proud of myself, and of course I got my hero points from my son. It was very gratifying to be able to use what I have learned for my professional career to solve a problem outside of work. MAC spoofing is normally the enemy that I guard against, but that day it was my friend.
Excellent job David!
Wednesday, April 9, 2014
dbl
Had to do some QoS today and ran across this: DBL.
Dynamic Buffer Limiting
Industry’s First Hardware and Flow-Based Congestion Avoidance at Wire Speed
A Cisco innovation, Dynamic Buffer Limiting (DBL) is the first flow-based congestion avoidance quality-of-service (QoS) technique suitable for high-speed hardware implementation. Operating on all ports in the Cisco Catalyst 4500 Series Switch, DBL effectively recognizes and limits numerous misbehaving traffic flows, particularly flows in a network that are unresponsive to explicit congestion feedback such as packet drops (typically UDP-based traffic flows). It is a multiprotocol technique that can examine a flow’s Layer 2/3/4 fields.
DBL provides on-demand Active Queue Management by tracking the queue length for each traffic flow in the switch. When the queue length of a specific flow exceeds its limit, DBL will drop packets or mark the Explicit Congestion Notification (ECN) field in the packet headers, so the flow can be handled appropriately by servers in the unlikely event of network congestion. Unchecked flows—also known as belligerent or non-adaptive flows—use excessive bandwidth, and their consumption of switch buffers results in poor application performance for end users. This misbehaving traffic can also negatively affect well-behaved flows and wreak havoc with QoS.
I know it sounds a little sales-pitchy, but it is still worth a look.
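I have not labbed this yet, but from the Catalyst 4500 QoS documentation the gist is that DBL gets enabled globally and then applied to a class in a policy map, roughly like the sketch below; treat the syntax as approximate and check it against your supervisor and IOS release:
qos dbl
!
policy-map UPLINK-OUT
 class class-default
  dbl
!
interface TenGigabitEthernet1/1
 service-policy output UPLINK-OUT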
Friday, April 4, 2014
Shot myself in the foot today.
Actually I pulled the trigger yesterday, but didn't feel it till this morning.
So we have this "new" IP Address Management (IPAM) software (Infoblox), which also does DHCP and DNS. Well, yesterday around 11:30 AM I was in the IPAM section creating a new network. I mistyped the network address and had to delete it out of IPAM. I must have also highlighted the users' network, which checked its check box, without realizing it, because this morning I received a bunch of calls that users at one site could not log in. I knew DHCP was the issue, because the users' IPs were 169.254.x.x/16. I jumped on the switch and used "sh ip dhcp snooping binding" to see if any clients had received addresses.
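For anyone who has not used it, the binding table looks roughly like this (the entry below is made up); an empty table on a DHCP-snooping-enabled VLAN is a pretty strong hint that no leases are being handed out:
Switch# show ip dhcp snooping binding
MacAddress          IpAddress        Lease(sec)  Type           VLAN  Interface
------------------  ---------------  ----------  -------------  ----  --------------------
00:11:22:33:44:55   10.20.30.101     85400       dhcp-snooping  20    GigabitEthernet1/0/7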
The total disruption for 10 users was about 30 minutes.
Lessons learned: Slow down with newer/unfamiliar software.
Cisco 4500X w/VSS
Monday, March 24, 2014
THANKS FOR YOUR SERVICE!!!
Thursday, March 20, 2014
Student's Cell Phone
Another one of my students' cell phones:
One of my CCNA students' Windows cell phones, with PuTTY on it, SSHing to his home Cisco 2960.
Way to go CW!
Tuesday, March 18, 2014
You can’t handle the truth!
Son we live in a world that has networks.
These networks have to be guarded by men with Certifications. Who's going to do it? You? Your system administrator?
You don't want the truth because deep down in places that you don’t talk about at parties; you want me on that Console,
you need me on that Console!
I have a greater responsibility than you can possibly fathom.
You weep for the users, and you curse the network.
You have that luxury, you have the luxury of not knowing what I know, that Firewalls while tragic, probably saves data. And my existence while grotesque and incomprehensible to you, saves data.
We use words like Subnet, ACL, OSPF, we use these words as the backbone of a life spent defending something, you use them as a punch line.
I have neither the time nor the inclination, to explain myself to a man, who rises and sleeps under the blanket of the very network that I provide, and then questions the manner, in which I provide it.
I'd rather you just say 'thank you' and go on your way.
Otherwise I suggest you pick up a console cable, and stand a post. Either way, I don't give a damn, what you think you are entitled to!
(T Charping)
Monday, February 10, 2014
Simple link upgrade from 1Gb to 10Gb,
SiteA Sw1 has a 1 Gb link to Core Sw2. We want to upgrade it to a 10 Gb link. Easy enough, right? Just copy the configs off ports G0/1 on SiteA Sw1 and Core Sw2, then paste them onto ports TenG0/24 on each switch, unplug the fiber at both switches, and plug into the new ports. (Now, we did a lot of legwork before the upgrade: we made sure the fiber was good and would support the TenGig, and we installed the new optics.)
So when we switched over the fiber and the ports lit up, we just knew we were good to go. But when we tested the link by shutting down the old 1 Gb port… that was all we got: no EIGRP routes, though I could see Core Sw2 with "show cdp neighbors". Of course this is not good, and I started thinking about what could be wrong.
A quick-thinking colleague said "passive-interface", and sure enough: because SiteA Sw1 has access-layer ports, it was using passive-interface default under the EIGRP config. Core Sw2 was not, because it does not have any access-layer ports. A "no passive-interface Te0/24" on SiteA Sw1 worked like a charm: routes, all day long!
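For anyone who has not been bitten by this one yet, the relevant pieces on SiteA Sw1 looked something like the sketch below (the AS number is made up):
router eigrp 100
 passive-interface default
 no passive-interface TenGigabitEthernet0/24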
Just goes to show you how not thinking about the big picture, and one little command, can sneak up and bite you.
Saturday, January 25, 2014
Quiet Weekend at home
Have a quiet Saturday at home this weekend.
Swept some leaves off the front porch, and had an emergency faucet repair.
Then I converted a physical computer to a virtual machine using VMware's P2V standalone converter. It is really easy to install and run on the physical machine; then just point it at the ESXi host, wait about 25 minutes, and you have an exact duplicate of the physical computer in your virtual world.
Wednesday, January 22, 2014
Bandwidth Hog at a low bandwidth site with a little help from the "Bandwidth" Command
We have a remote site that has not yet been moved to our fiber transport ring, so it is on an aggregated (3) T1 link to the rest of our network. It is a small site with less than a handful of users who only use the network to do their time cards, so 4.5 Mb is normally fine for them. But the other day we received notice from our service provider that the link had been saturated 24/7 since the beginning of the year. One thing about these aggregated WAN links is that the service provider handles the aggregation and passes you the combined link as an Ethernet link.
So what's the issue?
How do I track down the big talker, if all it takes is 4.5 Mb to saturate the WAN link?
I ran a "sh int | i /255" command to identify any ports that had high rxload or txload rates, BUT, because they are 10/100/1000 ports, all their readings were 1/255. So I used the range command to set the bandwidth label on each interface, and voilà: port 1/0/1 was receiving at 106/255 and port 1/0/48 (the uplink to the router) was transmitting at 106/255. I'm no rocket scientist (but I did sleep at a Holiday Inn last night), but the traffic coming in on port 1/0/1 was leaving on port 1/0/48.
I found my big talker.
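A rough sketch of the trick (the interface range below is just for illustration): setting the bandwidth value on the access ports to the WAN rate, 4,500 kb, makes txload/rxload get calculated against 4.5 Mb instead of the 10/100/1000 port speed, so the loaded port finally stands out when you filter on /255:
interface range GigabitEthernet1/0/1 - 48
 bandwidth 4500
!
Switch# show interfaces | include /255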
Monday, January 13, 2014
Nice and “simple” new IDF and equipment install?
It is great when we get a chance to install new equipment in a new IDF: no old equipment or old configurations to match up or worry about. Of course, when the equipment is the new (new to me, that is) Cisco Catalyst 3850 Series Switches, with their Cisco StackWise-480 technology providing 480 Gbps of stack throughput, packet capture capability with embedded Wireshark, and Stateful Switchover (SSO) resiliency, fun is on the horizon. The 3850 runs Cisco's IOS-XE operating system (OS), which does look and feel a lot like the old familiar IOS, but underneath the CLI it is a whole different animal. So when you load a new OS, here are some of the different commands:
Upgrading a Cisco 3850 stack's IOS-XE:
A. Copy the *.bin file to the active member (you can copy it from a USB drive).
   a. You can use a USB drive instead of TFTP if you want to; the USB port is on the front panel (at least on the 3850-48Ps).
B. Use "software install file flash:cat3k_caa-universalk9.SPA.03.03.xx.SE.150-x.xx.bin new" (make sure you get the "new" at the end of the line).
   a. **** You can load it straight from the USB drive if there is enough free space on it: "software install file usbflash0:cat3k_caa-universalk9.SPA.03.03.00.SE.150-1.EZ.bin new" ****
C. After the switch copies the software (it isn't just one file anymore), it will ask for a reboot.
D. ******** About 13 minutes while the stack reloads.
E. Use the "software clean" command to clean unused files out of flash.
Once the new OS is loaded and the switch is all good to go, make sure the switches are stacked if needed. I love the new stacking cables; they just seem to connect to the switch better than the old type.
One of the changes with these new stacks is that each stack has an Active and a Standby member to facilitate SSO resiliency, which ensures the management plane is never unreachable. You can assign the Active and Standby roles to specific switches in your stack by setting the switch's priority, the higher the better (max 15). We configure our uplinks in EtherChannel groups, with one port on the first switch in the stack and the other on the last switch in the stack, so we set the priorities on those two switches; that way, if we lose one, the other will still have access to the management plane.
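Setting the priority is a one-liner per member from privileged exec (the member numbers below are made up; the change takes effect at the next election/reload):
3850-Stack# switch 1 priority 15
3850-Stack# switch 4 priority 14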
And here are the packet-capture steps:
1. First configure an access-list to match the interesting traffic.
   a. ip access-list standard My-cap_acl
      i. It doesn't have to be standard.
   b. permit 198.214.208.24
2. monitor capture buffer My-cap_buff circular
   a. Creates a buffer named My-cap_buff.
3. monitor capture buffer My-cap_buff filter access-list My-cap_acl
   a. Associates My-cap_buff with My-cap_acl.
4. mon cap point ip cef My-cap_point g0/1/0.523 both
   a. Creates a capture point called My-cap_point.
5. mon cap point associate My-cap_point My-cap_buff
   a. Associates My-cap_point with My-cap_buff.
6. mon cap point start My-cap_point
   a. This starts the capture.
7. sh mon cap buffer My-cap_buff parameters
   a. This shows the capture parameters.
8. mon cap point stop My-cap_point
   a. This stops the capture.
9. And you can copy it off to a TFTP server, or read it on the switch.
I am looking forward to using these switches and learning all their little tricks and nuances, and I just heard the 4500-X switches are here and I might get to install them soon; check back for more fun.
Thursday, October 24, 2013
The next big project
The next big project, kind of.
This next project was just plain fun. The team had to replace a Cisco 1700 series router, with its blazing-fast 6 Mb (4 bundled T1 lines) uplink, with a Cisco 3750X Layer 3 switch with two 1 Gb links to our fiber transport ring.
After my colleague coordinated the fiber cut and fiber run to the facility, it was my responsibility to configure the new 3750X. As this new switch would be sitting between two existing nodes on our fiber transport ring, we decided to configure the uplink to the core on the 3750X as if it were the other node. This ensured no changes needed to be made at the core node. On the downlink to the other existing node, a new /30 address was configured. Once the new /30 address was added to the existing node, the EIGRP network statements were corrected for the new /30 network, and the original /30 network was removed, all links were up.
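Roughly what the change on the existing node looked like (the addresses and EIGRP AS below are made up for illustration):
interface GigabitEthernet0/1
 description Downlink to the new 3750X
 ip address 10.40.50.2 255.255.255.252
!
router eigrp 100
 no network 10.40.49.0 0.0.0.3
 network 10.40.50.0 0.0.0.3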
One small item we needed to pay special attention to was the fiber transceivers. The distance between nodes is over 16,000 feet, so we used CWDM 1590 SFPs in point-to-point mode.
Anytime we can save $3,500.00 a month and give the users better performance, it's a win-win (just plain fun).
Thursday, August 22, 2013
The Really Big Project.
This project had our team upgrading the networking equipment at one of our main sites, in preparation for our IP telephony rollout. This site houses one of our two UC clusters and serves three of our largest and most visible departments. The project included replacing a Cisco 3560E Layer 3 switch with a Cisco 6509, replacing a Cisco 3550 switch with a Cisco 3850, and adding a Cisco 3750X to an existing 3750X stack.
Due to the 24/7 nature of the site, the work had to be scheduled after hours, any loss of connectivity had to be kept to a minimum, and the total maintenance window was three hours.
A lot of planning went into ensuring the ports from the 3560E to the IDFs were mapped correctly on the 6509. Most of the optical transceivers were reused, but all of the fiber jumpers had to be replaced with longer lengths. Furthermore, the new Ten Gigabit optics in the new 3850 meant an optic in the 6509 had to be changed, and of course the jumper connector type had to be changed as well. Additionally, we had to add an optic and a jumper for the additional 3750X. Once the 6509 was powered up and the configuration checked, we broke the Ten Gigabit transport ring and connected the 6509's first Supervisor's Ten Gigabit port to the uplink, while leaving the ring's downlink in the 3560E, so that as we moved the IDFs' links the downtime was minimized. To minimize any downtime due to the boot-up of the 3850, one of the power cords was run through the rack to supply power to the switch so it would be booting while it was being mounted. After we mounted, cabled the stack cables on, and powered up the new 3750X in the existing stack, we had to configure the "new" stack with an EtherChannel to the 6509. Once the EtherChannel came up we noticed stack port 1 flapping; we reseated the stack cable on port 1, which fixed the issue. As we moved the IDFs off the 3560E and onto the 6509, we tested connectivity to the IDFs with the "show cdp neighbors" and "show interfaces trunk" CLI commands.
After all the IDFs were patched to the 6509, all IDFs were tested to ensure clients were receiving DHCP addresses, and we logged into the equipment in the IDFs remotely. Once the 3750X stack came up and the stack port issue was addressed, numerous extensions were called and checked in CUCM to ensure they had registered with the correct UC cluster.
We did run into a few issues with this project, mostly with the count and types of fiber jumpers. As all the jumpers needed to be replaced, and some were single-mode and others multi-mode, and with the addition of the new optics and jumpers, I lost count of the jumpers by type and did not bring enough of the correct type. This was fixed with a quick trip to the warehouse to pick up additional jumpers. Another issue appeared the next day as users started logging on: users reported a scarcely used application was not working. The app uses multicast, and we realized that when the config had been copied to the 6509 the IP multicast command syntax was different on the 6509, so the command did not take, and it was not caught. The correct command was issued on the 6509, and because the IP PIM commands were already on the interfaces, the app started working.
Lessons learned on this project: always have another set of eyes double-check the inventory list to ensure counts are correct, and check command syntax when copying configs between different platforms. This was a really big project with a lot of moving parts. The entire project went well, with very little disruption in service, and all work was completed in the maintenance window. Even with the issues, which were resolved quickly, we laid the foundation for the rest of the equipment upgrades and rollout. The team worked well together, and we learned about each other's strengths.