Posted on December 6, 2013
I have had absolutely zero time to make a blog post since starting my new job until now. I’ve decided that I am going to try to give weekly updates on what I am working on and how I am progressing, both in the real world and in my certification studies. My hours are a lot longer now because I have taken on a lot more responsibilities compared to my last job. Also, the environment is 10x bigger than anything I have ever seen before. So all my free time is taken up with trying to bring myself up to speed as well as finding time to study for my CCNP.
Most of my week was filled with meeting other members of my team as well as gathering all the necessary networking tools and diagrams. I will be the 9th engineer joining the network services team. There are also three other network teams that handle different aspects of the network, so I would say the total number of network engineers is somewhere in the range of 30-40. The most engineers I have ever worked with at a previous job was 3, so you can imagine the transition this is going to be for me.
Everyone on my team is extremely bright and talented in many areas of networking. Many are also very skilled at coding. We use a lot of in-house applications that were developed by both current and previous engineers. This got me thinking that I should probably brush up on my computer science. I had a bunch of computer science classes in college; however, I absolutely hated them. I am hoping that this time things may be different since I will be teaching myself. I will most likely try to learn some Python in my downtime at work. It is definitely not a top priority of mine, but I feel that with SDN on the horizon it can’t hurt to learn.
Besides that, I am really excited to work in this environment. There are constant changes being made to the network, so it will definitely be a great learning experience for me.
Posted on October 23, 2013
For the past two weeks another engineer and I have been coming in an hour earlier every day to move our access layer switches to the Nexus 5548s. It was quite a tedious process, but at the same time it was a great experience. This move involved migrating the access switches to 10Gb as well as utilizing EtherChannels up to the 5548s. The 5548s also had to be configured with EtherChannels/vPCs down to the access layer. We did 1-2 IDF closets a day, so the process did take quite some time. However, this morning we finished it all up.
This past weekend we had a maintenance window from 1am to 7am. During this time frame we had to migrate 7 blade centers to the Nexus 7010. There were also a few other tasks we needed to do, such as moving over some management cables and a few “not so important” servers. Again, EtherChannels and vPCs needed to be used between the Nexus and the blade centers. All in all it was a pretty smooth transition. The last thing we need to do now is remove some leftover items that are still hanging off the 6509s. After that the Nexus migration project will come to an end.
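For anyone curious, here is roughly what the vPC side of one of those blade center links looks like on each Nexus peer. This is just a sketch, and the interface numbers, port-channel IDs, and descriptions are made up for illustration, not our actual config:

```
! Run on BOTH Nexus peers (hypothetical IDs and interfaces)
interface port-channel20
  description vPC to blade center switch module
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  description Uplink to blade center
  switchport mode trunk
  channel-group 20 mode active
```

The nice part is that the blade center side just sees one logical EtherChannel, even though its links terminate on two different switches.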
Posted on October 3, 2013
Phase 2 of the network migration involved moving all the floor switches, which are currently connected to the 6509s, onto our 5548 distribution layer switches. One of our buildings is populated with 3750s, and each IDF closet has two 1Gb uplinks up to the core. Our other building is populated with 2950s, also with 1Gb uplinks to the core. Currently no floor switches are using EtherChannels, which means that one port from each IDF is in an STP blocking state. Obviously this is not ideal. For this phase we will also be migrating all the 3750s to 10Gb uplinks as well as utilizing port channels between 3750 stacks. The 2950s unfortunately will have to stay on 1Gb uplinks; however, they will also utilize port channels.
Before I go on, I just want to let everyone know that this phase was completed last night. We ran into some problems with the configuration on the Nexus 5ks. One in particular: a VLAN was missing from the VLAN database. The strange part was that the VLAN showed up in the allowed list on the trunk links. You would think some sort of error would have shown in the log. Another issue was that some VLANs were missing from the peer-link’s allowed list on the 5ks. Besides those two issues, the night went rather smoothly.
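If you ever hit the same thing, the gotcha is that the trunk allowed list and the VLAN database are independent: a trunk can “allow” a VLAN that doesn’t actually exist on the switch. A quick sketch of the checks and the fix (the VLAN ID and port-channel number here are made up for the example):

```
! Verify the VLAN actually exists -- not just that the trunk allows it
show vlan id 250
show interface port-channel10 trunk

! Create the missing VLAN, then make sure the peer-link carries it
vlan 250
  name EXAMPLE_VLAN
interface port-channel10
  switchport trunk allowed vlan add 250
```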
To be honest though, the most difficult part of the whole night was making sure that the fiber cables were properly flipped. As you know, fiber has a Tx and an Rx, so the run between the access switch and the 5k has to match up. This can get confusing when you throw fiber patch panels into the mix. We had one person at the 5ks and one person going to each floor switch to swap cables and verify configuration.
All in all it was a successful night. As of right now we will not be moving anything else for about 2 weeks. At that point we want to completely remove the 6509s from our network as well as move the rest of the blade centers to the Nexus.
Posted on September 30, 2013
So after a long and at times stressful night, we finally finished all the tasks we had planned. The second I stepped into the office at 10pm it was all work until about 8am. I have never experienced time flying by as fast as it did. It is almost surreal to think about. So here is a breakdown of my night from start to finish.
Upon getting in, I immediately had to start some preliminary work on the floor switches: logging into each one, setting it to VTP transparent mode, and saving the config. One of our older buildings was still using the VTP client/server model, so we figured this would probably be the best time to change everything to transparent. After that task was completed, I had to run one of our backup internet lines to a 3750 (which hung off the Nexus). At the time the backup line was attached to the 6509, so running the cable to the Nexus cabinet wasn’t much of an issue. It’s about 12am at this point, and this is where the fun begins!
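For reference, the per-switch VTP change is only a couple of commands on IOS. In transparent mode the switch keeps its own local VLAN database and simply forwards VTP advertisements without acting on them, which is why it is a safe landing spot when you want to get away from client/server:

```
conf t
 vtp mode transparent
end
copy running-config startup-config
```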
In Phase 1 we are still leaving the 6509s in our network; the only change is that instead of running layer 3, the 6509s will become layer 2 only, and layer 3 will move to the Nexus. In order to introduce the Nexus to the network, we connected it to the 6509s by creating an 8Gb port channel between them. We also increased the connection between our two 6509s from 2Gb to 4Gb. After that it was smooth sailing. We shut down VLANs on the 6509s one at a time and brought them up on the Nexus.
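The VLAN-by-VLAN move is simple in concept: shut the SVI on the 6509, then bring the same SVI up on the Nexus. A rough sketch is below; VLAN 100 and the addressing are hypothetical example values, not our real config:

```
! 6509 (IOS) -- take the gateway for one VLAN down
interface Vlan100
 shutdown

! Nexus (NX-OS) -- bring the same gateway up
feature interface-vlan
interface Vlan100
  ip address 10.1.100.1/24
  no shutdown
```

Doing this one VLAN at a time kept the blast radius small and made it easy to roll a single VLAN back if anything looked wrong.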
Now that layer 3 had been moved completely to the Nexus, it was time to migrate a blade center and a single floor switch. We have a stack of 3750s in the Nexus rack that is dedicated to blade center aggregation. The blade center has 2 switch cards on the back of it with 4 ports each, so on the 3750s I created 2 port channels, one going to each switch module on the blade center. Super smooth, with almost no issues. Well, there was one issue: a port on the switch module had gone err-disabled. A quick shut/no shut fixed the port channel link. Next came the floor switch. The floor switch was currently running at 1Gb and we needed to migrate it to 10Gb. Here I changed the modules on the 3750 floor switch to be 10Gb capable and copied the trunk link configs to the 10Gb interfaces. I then created a port channel from both interfaces. Again, no issues.
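Both fixes were quick on the IOS side. The interface names below are hypothetical, since the actual slot/port numbering depends on the stack member and which 10Gb network module is installed:

```
! Recover the err-disabled blade center port
interface GigabitEthernet1/0/4
 shutdown
 no shutdown

! Bundle the two 10Gb uplinks from the 3750 stack into one EtherChannel
interface range TenGigabitEthernet1/1/1 , TenGigabitEthernet2/1/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
```

Spreading the two uplinks across different stack members means the closet stays up even if one stack member dies.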
So that is basically a breakdown of the entire night. I am going to spare you a lot of the boring details, mainly because I don’t want to relive them lol. The next step in our network migration is to finish migrating the rest of our blade centers as well as moving the rest of our floor switches.
Posted on September 25, 2013
The time has finally come to implement the Nexus switches into our production network! I have mixed emotions of both excitement and nervousness, because this is the first time I have ever done a changeover of this magnitude. However, what excites me the most is the experience I am going to gain from doing such a large migration. To help our team stay organized, we have broken the migration down into three phases. I will only talk about Phase 1 for now.
Phase 1 entails introducing the Nexus into our core network and moving layer 3 routing from our 6509s to it. Basically, we will shut down all the VLANs on the 6509s and bring them up on the Nexus. Besides moving layer 3 to the Nexus, we also plan on moving a single floor switch to a 5548. The reason is that we want to take this migration as slowly and methodically as possible. If the switch move goes smoothly, we may even move over a couple more. One other task we have planned for that night is moving a highly populated blade center to the Nexus. This will help take a load off the 6509s and also allow us to begin migrating the rest of our blade centers (if the move is successful, of course).
Oh, and best of all? The maintenance window for these changes is 10pm to 7am!!! I have not pulled an all-nighter like this in a long time, but I am so ready for it. In the next couple of days I am going to try to shift my sleeping pattern so that when Saturday comes I won’t be a zombie. I’m sure the Red Bull and Starbucks will also help. I will report back on either Monday or Tuesday on how the night went.
Posted on August 22, 2013
So, in light of wanting to expand my knowledge of data center technologies, I have decided to go ahead and purchase a book published by Cisco Press titled “Data Center Virtualization Fundamentals: Understanding Techniques and Designs for Highly Efficient Data Centers with Cisco Nexus, UCS, MDS, and Beyond”. Quite the long book title, if I do say so myself. The title alone stood out to me because it mentions 3 technologies that I have been immersed in since starting my new job. Nexus, UCS, and MDS are advanced technologies that I hardly knew anything about 3 months ago, and now I am expected to bring myself up to speed ASAP. Well, I don’t have to bring myself up to speed; I could just sit back and enjoy the ride, but I am not that kind of person. I want to understand what I am working with as well as be able to explain it to someone else. So I looked through the book’s chapters, saw that it would be very helpful, and went ahead and placed an order for it. What made it even more appealing is that it is recommended reading for CCNA/CCNP/CCIE Data Center candidates. As we know, there aren’t many published training materials for that certification track yet.
I did buy a book about two months ago published by Sybex titled “CCNA Data Center: Introducing Cisco Data Center Networking 640-911”. I skimmed through it rather quickly; however, it did not go nearly as in-depth as I needed it to. I could probably pass the first part of the CCNA Data Center at any time. It seemed like a lot of review of CCNA topics with some Nexus material thrown in.
Technologies that I am currently learning for our new data center include the Nexus 7010 and 5548UP; UCS blade servers along with 6248UP Fabric Interconnects; and the MDS 9513. Hopefully this book will give me a much greater understanding of all of the above.
Posted on August 14, 2013
The past two weeks of work have been pretty busy, filled with Nexus labs, everyday miscellaneous tasks, and now learning Cisco UCS. Basically, at least the way I understand it, the Unified Computing System is Cisco’s way of bringing the network, storage, and servers of your data center together. I might add that it does an awesome job of that too!
In our rack we have two 6248UPs that connect down to the UCS blade chassis via FCoE (4 10Gb links to each 6248UP, for a total of 8 links). We then have the 6248UPs connecting up to the Nexus 7010 via fiber Ethernet (4 10Gb links, 2 from each 6248UP). Lastly, we have 4 FC links going from each 6248UP to our MDS SAN. Phew!! You do not want me to tell you how long it took me to wrap my head around all this lol. The 6248UP is an awesome switch that can speak Ethernet, Fibre Channel, and Fibre Channel over Ethernet, which makes it very useful in data center environments. Oh, and one more thing: the 6248UPs are also connected to each other through their L1 and L2 ports. These ports need to be connected in order to run the two 6248s in cluster mode. The management ports were also connected. Okay, that’s it, I’m done lol. I am posting a picture below of the basic topology of what I described above. The only difference is that our equipment and links are slightly better than what the topology shows.
So yesterday I decided to go ahead and start on the initial configuration to get this all connected. First I cabled everything up as described above. Next I consoled into the 6248UPs and configured them in cluster mode. The whole setup is very intuitive: all I had to do was enter some basic IP address information on each of the switches and specify that I want to run them in cluster mode. I also had to enter a virtual IP that the switches would share. After all that was done, I could open up my web browser and browse to the virtual IP address, which took me to the UCS Manager interface. All I can say is this interface is awesome! It gives you information on just about anything you would ever want to know about the entire UCS system. You can also configure ports by right-clicking them. The only configuration I did was changing the 8 links we have going down to the UCS chassis to server ports. Hopefully today I will have time to go back in and play around a little more. Oh, and I forgot to mention that UCS Manager even builds you a graphical topology of what it detects connected to it!
So I have still been learning a ton about Nexus these past few weeks, and I am starting to feel pretty comfortable at the NX-OS command line. It’s pretty similar to IOS, which is good. One thing that took some getting used to was enabling specific features. For example, I spent about 15 minutes scratching my head over why I couldn’t create an SVI… and then I facepalmed when I realized I had to enable the interface-vlan feature. I won’t be making that mistake again.
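For anyone else new to NX-OS: almost nothing is on by default, and `show feature` will tell you what is and isn’t enabled. Once the feature is on, creating an SVI looks like this (the VLAN and addressing are just example values):

```
feature interface-vlan
interface Vlan10
  ip address 192.168.10.1/24
  no shutdown
```

Until `feature interface-vlan` is enabled, the switch simply won’t accept the `interface Vlan10` command, which is exactly what had me stumped.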
My configurations have been pretty basic, and currently I am still experimenting with vPCs. Below is a picture of my current topology. It’s crazy to think how much I have learned in such a short time, and I am loving every moment of it! You really don’t realize how much work goes into building a data center until you actually have to build one from the ground up. I have really enjoyed learning about data center technologies, and I think I am going to make them my niche in the coming years.
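Since I mentioned vPCs, here is the bare-bones shape of what I have been labbing on the two 5548s. All the IDs and addresses are made-up lab values, and a real deployment needs more thought (keepalive over a dedicated link or mgmt0, consistency checks, and so on):

```
feature lacp
feature vpc

vpc domain 1
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

! Peer-link between the two 5548s (configured the same on both)
interface port-channel10
  switchport mode trunk
  vpc peer-link

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
```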
Well, to top off my CCNP Switch studies, I have also been learning about Cisco Nexus. One good thing I have found is that a lot of topics from my CCNP Switch studies are reinforced throughout my Nexus studies. I am not going for any data center certification just yet; however, my current company is implementing Cisco Nexus, so I kind of have to familiarize myself with all things Nexus. I don’t want to get left behind while all the other engineers understand Nexus and I am always having to ask them questions about it. Basically, I am going through the INE CCIE Data Center Nexus videos to get some kind of grasp of the technology. I feel like I am trying to tackle something far beyond my skill set, but this is something I have to get under my belt.
Currently in our test lab we have 2x 5548s, 2x 6248s, 1x UCS, and 1x Nexus 7010. Another rack is being put together and will basically have the same hardware installed. I really enjoy data center technologies, and I think it is something I want to become my main focus in the future. I am not going to lie though: I feel even MORE overwhelmed now. I always take a step back, though, and think to myself how lucky I am to have been given this opportunity, and that within a year’s time I will have progressed so much. Heck, just 2 years ago I was taking my A+!
First time consoled into a Nexus!!!
So yesterday was a very eventful day. We finally got around to racking the Nexus 7010. The process took longer than expected; however, at the end of the day, it got done. You don’t realize how much thought and effort goes into just racking a unit. First we had to put the APC rack together. Well, it was already put together, but we had to adjust the rack posts so they would sit how we like. Next we had to unpackage the Nexus and get it into the rack somehow. We used a device called a server lift, which is basically a smaller version of a forklift designed for use in data centers. I’ll spare you the details on how we got the Nexus onto the lift, but it was interesting haha. Once we finally got the Nexus in the rack, it came time to screw it in, which itself took a solid 30 minutes because putting in cage nuts is not fun. Today we continue filling up the new rack; I believe the Cisco 5548 and the Cisco UCS are going in next.
Below is a picture of the Nexus after we racked it.