RANCID and IOS 15.2 – blank config and how to work around newer file privileges

Somewhere around IOS 15.2, a structural change to privileges apparently broke the ability of non-privilege-15 users to copy the running config, something RANCID and other backup tools rely on.  Either the RANCID/Oxidized community simply uses privilege 15 users in their configs, which I refuse to do on principle, or my google-fu is poor, because I have not found this information stated explicitly anywhere.

In any case, the typical config allowing a user to download the running config without level-15 privileges on Cisco IOS has always been documented as:

username rancidbackup privilege 10 secret [md5-pass]
privilege exec level 10 show running-config view full
privilege exec level 10 show running-config view
privilege exec level 10 show running-config
privilege exec all level 1 show


The above, on IOS <= 15.1, was always enough to allow user “rancidbackup” to issue a “show running-config view full” and have the CLI output the current configuration.  Under IOS 15.2, the behaviour appears to have changed: some folks receive a “permission denied” error, while others (as in my experience) simply get an empty config as output.  As an example:

ssh -l rancidbackup


Router#show running-config view full
Router#show running-config

This annoying behaviour can be worked around in two different ways.

The first is to cave and allow the RANCID backup user to obtain level-15 privileges upon login.  The risk of compromise via this admin-level user can be mitigated by limiting the IP addresses allowed to log in via ACL.  Example:

username rancidbackup access-class 99 privilege 15 secret [md5-pass]
access-list 99 remark RANCID-ONLY
access-list 99 permit host any

By adding access-class to the username definition, this user can no longer be used from other sources.  This might even be advisable for any scenario where scripting/tools are used to access the device.  However, this method still requires that user to have admin level-15 privileges.


I was pointed to https://supportforums.cisco.com/discussion/11691446/error-opening-nvramstartup-config-permission-denied as a possible second solution.  Apparently there is a mechanism to control file access (which includes the running-config, apparently…) via the command “file privilege [level]”.

In our case, by issuing “file privilege 10”, we are able to see via “show running-config view full” (and not plain “show running-config”) the information we want to back up with our tool, in this case RANCID.  As the output below shows, “show running-config” is useless, but “show running-config view full” gives us what we need:


Router#show running-config 

Building configuration...
Current configuration : 261 bytes
! Last configuration change at 22:04:36 EDT Mon May 30 2016 by user
! NVRAM config last updated at 13:15:46 EDT Thu May 26 2016 by user
! NVRAM config last updated at 13:15:46 EDT Thu May 26 2016 by user

Router#show running-config view full 
Building configuration...
Current configuration : 90474 bytes
! Last configuration change at 22:04:36 EDT Mon May 30 2016 by user
! NVRAM config last updated at 13:15:46 EDT Thu May 26 2016 by user
! NVRAM config last updated at 13:15:46 EDT Thu May 26 2016 by user
upgrade fpd auto
version 15.2
.... etc etc


I am still researching the secondary implications of setting “file privilege” to anything other than 15.  I haven’t found a way, using a privilege 10 user, to modify or delete files, but use this at your own risk (!).  Combining the two mechanisms above, I believe, provides sufficient means to allow non-privileged, read-only access to the configuration for config management and other scripting tools.
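Putting it together, a minimal sketch of the level-10 read-only backup user combining both mechanisms (the ACL host address 192.0.2.50 is a hypothetical stand-in for your RANCID collector):

```
username rancidbackup privilege 10 secret <md5-pass>
privilege exec level 10 show running-config view full
privilege exec all level 1 show
! allow level-10 users to read files, including the running config (IOS 15.2+)
file privilege 10
! restrict where this user may connect from
access-list 99 remark RANCID-ONLY
access-list 99 permit host 192.0.2.50
```

Whether access-list 99 is then attached per-user (as in the level-15 example above) or per-vty via access-class depends on your platform.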


Damn you, ICMP redirect!!! (or rather, how to flush a cached ICMP redirect under CentOS7/Linux)

Been a while since I added anything new here.  Been busy trying to keep my head above water, I guess.  In any case, I came across a situation during $dayjob$ where I had to separate two networks sharing the same VLAN into two distinct VLANs, since they were actually always supposed to be separate and are geographically distinct as well.  The router configuration was as follows:

interface Vlan6
 ip address secondary
 ip address
 no ip redirects
 no ip proxy-arp

Networks and were essentially on a common VLAN.

Anyway, the task seemed simple: leave on VLAN6 and create VLAN5 on this router to add a routed interface between and, which was being moved to another device.

 interface Vlan5
 ip address

Now, traffic from would have to transit via, being routed by two different routers.  Hosts in would have to transit via (another interface on the new router) and to get to.  Simple, everyday stuff.  Right?


Well for a reason that at first escaped me, the one and only host in that had communications with could ping every other host in the network *except* the two I needed it to communicate with, and until I added the routed interfaces, was working perfectly.  I was confounded until I tried a traceroute from one of the hosts in back to

[root@host1 ~]# traceroute
traceroute to (, 30 hops max, 60 byte packets
 1  (  849.491 ms !H  849.454 ms !H  849.426 ms !H

Now why would I get a HOST_UNREACHABLE?!  From MYSELF!! ( is host1)  Here is my routing table:

[root@host1 ~]#  ip -4 route
default via dev eth0 dev eth0  proto kernel  scope link  src

Seems normal (?)

Other traceroutes towards hosts were working:

[root@host1 ~]# traceroute
 traceroute to (, 30 hops max, 60 byte packets
 1 (  0.225 ms  0.192 ms  0.180 ms
 2 (  5.591 ms
 3 ( 6.524 ms

My gateway for is, and the above makes sense.  It was only then that I realized the host had cached an ICMP_REDIRECT for, which I verified with the ip route command:

[root@host1 ~]# ip route get via dev eth0  src
    cache <redirected>

Bingo.  A cached ICMP_REDIRECT from, which no longer exists.  I didn’t bother to check how long the timeout is for these cached entries; however, it was longer than I would have expected, especially since I troubleshot this for more than 15 minutes (*cough* *cough*).

In any case, I learned a new command to zap these, and thankfully, stuff started working again:

[root@host1 ~]# ip route flush cache

And with that, my troubles were gone:

[root@host1 ~]# ip route get via dev eth0  src

From my reading of various Google results, gc_timeout is what defines the actual timeout:

[root@host1 proc]# cat ./sys/net/ipv4/route/gc_timeout

I can safely say my troubleshooting went way past this timeout, assuming it is in seconds and not minutes, so there is some wonkiness beyond it that I still need to check.  In any case, knowing how to clear the cache is evidently useful as well!
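And since the root cause is the host honouring ICMP redirects at all, a preventive option is to refuse them outright so nothing gets cached again.  A sketch, assuming the standard Linux sysctl knob names (it writes to /tmp so nothing is applied blindly; review, merge into /etc/sysctl.conf and run sysctl -p as root):

```shell
#!/bin/sh
# Generate the sysctl lines that stop the kernel from accepting ICMP
# redirects (and thus from ever caching one again).
conf=/tmp/no-redirects.conf
{
    echo "net.ipv4.conf.all.accept_redirects = 0"
    echo "net.ipv4.conf.default.accept_redirects = 0"
} > "$conf"
cat "$conf"
```

The trade-off is losing legitimate redirects, which on a sanely routed segment you arguably shouldn’t be relying on anyway.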

IPv6 router solicitation, Link-Local gateway, Ubuntu and a bit of WTF…

I have been using RHEL/CentOS-lineage Linuxes for a good while now, and I believe the strength of that distro mindset is its dedication to package dependencies.  Some packages or software that require bleeding-edge (or even not-so-recently released) libraries, CPAN Perl modules, etc. don’t behave well, or won’t install/compile without breaking those dependencies or resorting to source packages.  Paradoxically, this is also the weakness of RHEL/CentOS: that same dedication to dependencies means they are occasionally a bit behind when it comes to the latest and greatest, and you can get handcuffed if you want to remain within the confines of the dependency sandbox.


In any case, I wanted to toy with some software that clearly seems written for Ubuntu (the first tell-tale sign being that you need to add a bunch of user-contributed repositories in order to install it…).  This software requires IPv6 connectivity, so I decided to trash a BIND virtual server I was using and replace it with Ubuntu 14.04 from the .ISO.  I am not totally foreign to Ubuntu, as I use Ubuntu desktop on a couple of laptops; however, in a server environment I have never bothered to consider it, mostly out of laziness.  Here are a few observations, some of which I am still trying to wrap my head around.


BIND startup

My first challenge was getting BIND to start correctly.  I usually use “listen-on-v6” to specify which interfaces I allow BIND to bind to, and was seeing this incredibly annoying message on bootup; BIND was not binding to any IPv6 addresses, defeating the purpose of running BIND on this server in the first place:

yet 2001:db8:9::37/64 is statically configured in my interface config (!!):


Apparently, Ubuntu is in such a hurry to boot that it starts daemons before interface init is even finished (!)  Major WTF.  So it would seem that it is prudent (and necessary?) to put the IPv6 definition of eth0 inet6 *before* the IPv4 one.  Guess that is good to know.  Once that’s done, BIND starts correctly and we can move on.

IPv6 default gateway gong-show

My next hurdle was not far away.  Even if I manually set the gateway, for some reason Ubuntu feels it necessary to send a router solicitation.  My network has periodic router advertisements disabled, simply because I don’t want SLAAC to work in that particular test environment.  So even if I statically configure Ubuntu with a prefix, mask and gateway, it still goes exploring for routers and sends out an ICMPv6 router solicitation.  So when I check the routing table:

Now wait a minute!  My network does not send periodic RAs, I have statically defined my gateway, and Ubuntu overrides whatever I defined with auto-learned crap??  After 500 seconds, the RS-learned gateways disappear and whatever I defined as default then remains, and sometimes no gateway remains at all.

Link-Local Gateway

Remember the bit above about putting the IPv6 config before IPv4?  My network uses HSRP as first-hop redundancy, and IOS didn’t allow global addressing for it until “recently” (and some supported IOS trains do not have this feature).  In any case, I want to use a link-local address as a gateway.  For some reason, if the IPv6 interface in Ubuntu is defined *after* the IPv4 address, the LL gateway is sometimes ignored (what?).  So not only do I possibly get bogus gateways via ICMPv6 router solicitation, I might end up with no gateway at all once those RAs’ lifetime is up.  Not good.

The solution is to disable router-advertisement learning in /etc/sysctl.conf by adding the following:
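A minimal sketch of that sysctl.conf addition, assuming the standard Linux accept_ra knobs (eth0 being my interface; adjust to yours):

```
# stop learning routers/prefixes from RAs; the static gateway stays put
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0
```

Apply with sysctl -p (or a reboot).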

And contrary to RHEL/CentOS (and pretty much any environment I have seen where a link-local is used to route something), it is not necessary to specify the interface; I suppose Ubuntu is doing some logic there:
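For illustration, an /etc/network/interfaces stanza along these lines (the link-local gateway fe80::1 is a hypothetical stand-in for your HSRP address; the prefix is the one from my eth0 config above):

```
# note: this inet6 stanza goes *before* the inet (IPv4) one, per above
iface eth0 inet6 static
    address 2001:db8:9::37
    netmask 64
    gateway fe80::1
```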



So we note the absence of %device.  I would have to test whether this still works with multiple NICs.  Hopefully it does.


Network restart broken


My last adventure was trying to restart networking in some controlled way after making changes.  I believed I had broken my install when I got the following as a result:

After some googling, this is apparently a known issue.  So, shucks, another thing that will hopefully get fixed.  At the rate Ubuntu seems to update packages, I suppose it might not be a long wait.




New domain….

So apparently, it’s possible to have a .NINJA TLD.  So I haz one now 🙂  And this site is renamed!

When I registered it I got a popup telling me there was a trademark on “COMMANDLINE”.  As this site and any activity surrounding it is neither my livelihood nor commercial, I am going to risk registering it.  I can’t imagine being sued over a blog using this domain name that nobody reads and that doesn’t generate any $$$.  Until that happens I will be enjoying http://commandline.ninja/




Samsung Galaxy S4 – IPv6 borked….

I have been running around trying to figure out why my Samsung Galaxy S4 (Android 4.3) seems to lose its IPv6 prefix after it has been sleeping for a while.  At first I blamed my edge CPE device, as I had recently rebuilt an OpenWrt install and switched from WIDE-DHCPv6+radvd to 6relayd, and I had never noticed whether the previous implementation had this issue or not.

Symptoms: if I leave the device idle for a certain amount of time (apparently longer than the preferred lifetime of the prefix), then open the browser, IPv6 sites are not reachable until my router sends an unsolicited RA.  I installed some free apps such as MyIPv6 and Network Info II, and although all seemed normal during phone use, after a certain time the prefixes would become non-functional, and would not even appear in these apps as existing, until the next RA sent by the router, at which point all would resume correctly.

As well, if for whatever reason the prefix received by prefix delegation changes, an idle device (read: screen off) will only learn of the new prefix once the router sends an RA containing it.


Out of desperation I started googling and apparently I am not alone:



I especially liked the comment from the Samsung developers:

OK, well I tested this, as did the reporter of this issue: simply locking the phone (turning off the screen) with an ICMP ping running to the IPv4 and IPv6 addresses respectively.  The IPv6 address of my phone ceases to reply to ICMP echo requests when I turn off the screen (as expected), YET IPv4 REQUESTS ARE NOT IGNORED and replies are sent by the GS4!  So what gives, Samsung devs?  This happens with or without the power-saving features enabled.  I don’t believe this cockamamie notion about running down batteries: it’s either laziness, incompetence or an outright dismissal of a bug.

And frankly, I wouldn’t really care if ICMP echo/echo-reply traffic were discarded; however, ND, RA and other critical v6 traffic shouldn’t be.  Or, at the very least, send a router solicitation or reset the IPv6 stack entirely on wake, because when the phone wakes from sleep or is unlocked, the IPv6 stack is broken until a router sends an RA.  In my case, 6relayd seems to send one every 7 minutes or so, with a randomized interval of +/- a few seconds.  And some networks I have connected to send RAs every hour, not every few minutes, since IPv6 allows address assignment to occur without periodic RAs… so it could be a long wait!  And even if a network sends an unsolicited RA every 30 seconds, that is 30 seconds where your connectivity is v4-only despite being on a dual-stacked network.  Not very modern behaviour by a so-called industry leader!

I hate to admit it, but I tested iPhone4, 4S, 5 and iPad mini as well as Windows XP/7, and they all behave correctly when woken from whatever state they were in post-prefix-expiry.  I will be borrowing some other Android devices as I can get my hands on them to compare results.  But as of now I don’t have a very high regard for Samsung’s IPv6 implementation!


Samsung coders need to get a clue!  “End users can connect to networks continually by IPv4” is not an acceptable answer.  As IPv6 grows in popularity and necessity, that is a pretty bone-headed opinion.  Need I point out that, at the time of this writing, that forum post is less than 6 months old!


Cisco IPv6 flow export with “Flexible Netflow”

Sometime between IOS 12.4 and 15.x, IPv6 flow export configuration changed.  It used to be quite simple:

Pre-flexible-netflow configuration:

Voilà: IPv4 and IPv6 flows exported to your favorite collector (mine being the wonderful and always useful NFDUMP/NFSEN).

Somebody at Cisco obviously found this too easy, so it is now required to re-engineer this functionality with “Flexible NetFlow”.  It does allow you, via the creation of a flow record, to customize your own format, which is beyond this simple blog post and likely my current understanding as well ;).

For my purposes, I simply want to export IPv6 flows to my collector, business as usual.  This is done by defining a flow exporter and a flow monitor, attaching the exporter to the monitor in the global configuration, and applying the monitor to the interface somewhat as before:

Flexible-netflow configuration:
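For illustration, a sketch of that configuration (the exporter/monitor names, interface and collector address 192.0.2.10 are hypothetical; the record name is one of the predefined Flexible NetFlow records):

```
flow exporter NFSEN-EXPORT
 destination 192.0.2.10
 transport udp 9995
!
flow monitor IPV6-MON
 exporter NFSEN-EXPORT
 record netflow ipv6 original-output
!
interface GigabitEthernet0/1
 ipv6 flow monitor IPV6-MON output
```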

Presto.  The original-output record specified in the flow monitor uses the predefined (legacy?) format instead of a flow record with user-defined parameters.

And as a side note, if you haven’t played with NFDUMP/NFSEN, you really should give it a try; it’s very useful as a tool for traffic analysis, DDoS investigation and post-mortems.


CentOS policy routing – why yes, it can be done!

Over the years working in LAN networking, there are several situations that dictate that a host/server have multiple IP addresses on the same or different physical or logical devices: for instance, connecting to a private management-only network/VLAN, offering connectivity to an inside network on a private NIC, etc.


This scenario often causes two somewhat annoying behaviours:


1) the return traffic is often sourced from the “primary” IP address of the host/server, most often the one on the subnet associated with the default gateway

2) a surprising number of alleged “network administrators” seem to think having multiple default gateways (one for each IP address, of course) is a good idea.  Over the years I have come across this situation many times and, in every case, it has obviously NEVER WORKED.


Situation #2 can only be fixed by not mind-numbingly entering multiple gateways without restraint.  As for situation #1, RedHat/CentOS and derivatives support, via iproute2, the ability to create routing rules, ensuring that traffic is sourced from a particular IP in cases you define.  Great!  Multiple-NIC or logical-interface routing on Linux is possible!  (And yes, it involves having multiple gateways, but not stupidly and blindly adding them to the routing table.)

It is very simple to implement and involves the steps below.  As an example, let’s assume we created a management VLAN (VLAN4) and want to add a logical interface on a server in that VLAN to access it internally.  We will be using as an inside network.


Step 1: Create a VLAN interface

This creates the necessary interface on VLAN4 from primary physical interface eth0:

vi /etc/sysconfig/network-scripts/ifcfg-eth0.4


Step 2: Create a iproute2 table for that management network

Edit /etc/iproute2/rt_tables to add a new entry and give it an arbitrary (unused 😉 ) name:

vi /etc/iproute2/rt_tables

# reserved values
255     local
254     main
253     default
0       unspec
# local
#1      inr.ruhep
200     MGMT

Note that between 200 and MGMT is a tab character.


Step 3: Create a default route for that network

vi /etc/sysconfig/network-scripts/route-eth0.4

default table MGMT via

This creates a default route in the MGMT table via, which is your inside routing intelligence.

Step 4: Create a routing rule for

To ensure that traffic sourced from the address uses the MGMT table, a rule must be defined:

vi /etc/sysconfig/network-scripts/rule-eth0.4

from table MGMT


And that’s it!  Restart your network:

/etc/rc.d/init.d/network restart


Using iproute2 commands, we can check that what we did works (as can wireshark 😉):


[root@server network-scripts]# ip rule show
0:      from all lookup local
32765:  from lookup MGMT
32766:  from all lookup main
32767:  from all lookup default
[root@server network-scripts]# ip route show dev eth0.4  proto kernel  scope link  src dev eth0  proto kernel  scope link  src dev eth0  scope link  metric 1002 dev eth0.4  scope link  metric 1006
default via dev eth0


Note: this would also work with a second physical interface; for instance, to utilize a second NIC instead of a VLAN logical interface, substitute all uses of eth0.4 with eth1.
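The persistent files above map to runtime iproute2 commands along these lines (a sketch; the management address 10.0.4.20 and gateway 10.0.4.1 are hypothetical stand-ins, and the commands are echoed here so nothing is applied by accident):

```shell
#!/bin/sh
# Runtime equivalents of route-eth0.4 and rule-eth0.4; drop the "echo"
# prefix and run as root to apply without a full network restart.
echo ip route add default via 10.0.4.1 dev eth0.4 table MGMT
echo ip rule add from 10.0.4.20/32 lookup MGMT
```

Handy for testing the rule before committing it to the network-scripts files.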


BIND DNS64 and views

I am starting to experiment with IPv6-only networks and require a DNS64 service to be used in conjunction with NAT64.  Luckily, BIND 9.8.x offers this capability.  However, I don’t want my DNS server to reply with a quad-A record for every query now that I have enabled DNS64, since this server is also used by non-v6 clients, as well as dual-stack clients that I do not want to send searching for 64:ff9b::/96.  I therefore added a secondary IPv6 address on the DNS server for my v6-only clients to use specifically for DNS64, and put it in its own view.  In the config below, the BIND DNS64 service will answer on 2001:db8::64 🙂


acl rfc1918 { 10/8; 192.168/16; 172.16/12; };
options {
        listen-on port 53 { };
        listen-on-v6 port 53 { };
};
view "external" {
match-destinations { 2001:db8::10; ::1;;; };

}; //end view external
view "dns64" {
    match-destinations { 2001:db8::64; };
    dns64       64:ff9b::/96 {
        clients { any; };
        mapped { !rfc1918; any; };
        exclude { 64:ff9b::/96; ::ffff:0000:0000/96; };
        suffix ::;
    };
}; //end view dns64


The corresponding digs show the difference between the two views:

dig @2001:db8::64 +short -t AAAA www.cnn.com
dig @2001:db8::10 +short -t AAAA www.cnn.com

Looks like I am ready to start working on learning how to use TAYGA.

CentOS 6.3 released – features….and bugs :)



Well folks its finally here ; http://lists.centos.org/pipermail/centos-announce/2012-July/018706.html

Among the changes, imitating its parent RedHat 6.3, OpenOffice is replaced by LibreOffice.  I have yet to try it out, but apparently it has a bit more community support and can read .docx files.  They are also deprecating Matahari, a management API I know little about, in favor of another suite (CIM).  RedHat even recommends removing Matahari, which is fine by me since I never bothered to learn it 🙂

BIND 9.7 is also replaced by BIND 9.8 (yay!), a rare move by RedHat in my memory, since they dislike changing version numbers within the lifecycle of a distribution.

Another notable addition to CentOS 6.3 is the set of tools to convert physical machines to virtual machines for use with KVM: virt-p2v and virt-v2v (the latter for migrating a virtual-to-virtual installation).  Unlike RedHat 6.3, which ships a separate .ISO image to boot from in order to use these tools, CentOS includes that .ISO in the .rpm.  I look forward to trying it out.

I have already upgraded a number of guests (my KVM host, however, is still running 6.2; I have not worked up the courage to give it a go yet, as it should go well but I can’t be bothered with glitches right about now), and all seems to work smoothly.  I did come across one issue, however: if you utilize IPv6 resolvers, a new(?) bug in libresolv causes a segmentation-fault crash in applications that make a particular call to it: sendmail, freshclam/clamav, emacs, openvpn, chrony, postfix.  Here is the RedHat bugzilla entry, as well as the CentOS entry.

Even if only one of your resolvers is an IPv6 address, affected software will crash:


freshclam[6598]: segfault at 1 ip 00007f9be8b37596 sp 00007fff9ffac0b0
 error 6 in libresolv-2.12.so[7f9be8b2b000+16000]

sendmail[7374]: segfault at 1 ip 00007f95d7b73596 sp 00007fffa93295a0 
error 6 in libresolv-2.12.so[7f95d7b67000+16000]


I look forward to the fix.  In the meantime I will get to tinker with P2V/V2V tools.






PHP updated – CVE-2012-1823 / CVE-2012-2311


A bug in PHP 5 was disclosed, somewhat accidentally (apparently somebody made a Reddit post public inadvertently… why people would use such sites for sensitive info is beyond me… anyhoo), and is finally patched by RedHat and its derivatives CentOS and ScientificLinux –


Ubuntu has also released a patch –


RHEL 5 & 6 should no longer be vulnerable to either CVE-2012-1823 or its reincarnation CVE-2012-2311 (the bug was not correctly patched the first time around).  RedHat claims their fix is complete; I cannot vouch for Ubuntu, so don’t blame me if you have to patch again later.


The vulnerability itself is quite old, having snuck into the code 8 years ago and lain undiscovered until recently.  It relies on the use of php-cgi (running PHP as a forked CGI process rather than via the more mainstream mod_php mechanism).  One of the many consequences of this bug is source-code exposure (via ?-s); since many PHP sites have database username/password information embedded in the PHP code, this vulnerability can and will compromise unpatched sites where this is the case.  This is one of the many reasons it is a better idea to use PHP’s include functionality to provide database/user/password connection info, and to keep that included file outside of the HTTP webroot in the first place.  Other possible exploits include executing code or uploading files on the remote filesystem… scary, nasty stuff!

Even the almighty Facebook, set for an IPO this week, was vulnerable to this (they were running PHP as a CGI, apparently!!).

The IPO could have been something of a bust had Facebook been hacked a week before it hoped to raise $95 billion USD.  Who would have seen that one coming?!