This week could not be busier, so to relax this evening: another small problem. All towards the ultimate goal: hacking systems like a pro ;).
Let's find a few Cisco servers that could be our target for ... hmm... closer inspection, should we choose to learn about their vulnerabilities.
How about we start by looking at the company's web page?
Can we find any interesting target IP addresses there?
Raspberry Pi to the rescue!
First let's create a working directory and download the index.html page:
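The setup commands are not shown in the session paste, but judging from the prompt 'pi@clu:~/playground $' below, they would be something like:

```shell
# Working directory name assumed from the shell prompt in the session
mkdir -p ~/playground
cd ~/playground
# Fetch the front page; wget saves it as index.html
wget http://www.cisco.com/
```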
--2016-07-13 21:37:53--  http://www.cisco.com/
Resolving www.cisco.com (www.cisco.com)... 18.104.22.168, 2a02:26f0:71:185::90, 2a02:26f0:71:18d::90
Connecting to www.cisco.com (www.cisco.com)|22.214.171.124|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html              [ <=> ]  66.06K  --.-KB/s    in 0.005s

2016-07-13 21:37:53 (12.5 MB/s) - ‘index.html’ saved

pi@clu:~/playground $
Okay. Let's search for hyperlinks inside the HTML code and see what we can find.
pi@clu:~/playground $ grep "href=" index.html
WOW! The output is huge! Calling 'cut' for help now.
I am going to use the slash (/) as the delimiting character (-d); the third field (-f3) should then hold the host names. Hopefully...
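Before running it on the real file, a quick test on a made-up link (www.example.com is just a placeholder) shows the field numbering:

```shell
# Splitting on "/": field 1 is 'href="http:', field 2 is the empty
# string between the two slashes, field 3 is the hostname
echo 'href="http://www.example.com/page.html"' | cut -d "/" -f3
# → www.example.com
```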
pi@clu:~/playground $ grep "href=" index.html | cut -d "/" -f3
It is beginning to shape up a bit, but the output still contains lines of text that are not DNS names. Another grep matching on a dot should clean it up (to match a literal dot, the character must be escaped with a backslash).
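A two-line sample (made up for illustration) shows what the escaped dot keeps:

```shell
# The backslash makes the dot literal, so 'no-dots-here' is dropped;
# an unescaped dot would match any character and keep both lines
printf 'no-dots-here\nwww.example.com\n' | grep "\."
# → www.example.com
```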
pi@clu:~/playground $ grep "href=" index.html | cut -d "/" -f3 | grep "\."
There is still some text I don't need: a few DNS names are followed by a double quote and extra characters. Another 'cut' will fix that. It would also be a good idea to remove duplicate names and save the whole "discovery" to a file, 'links.txt'.
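In isolation, that extra cut would trim a leftover line like this (sample data, not from the real output):

```shell
# Everything from the first double quote onwards is discarded
echo 'www.example.com" target="_blank' | cut -d '"' -f1
# → www.example.com
```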
pi@clu:~/playground $ grep "href=" index.html | cut -d "/" -f3 | grep "\." | cut -d '"' -f1 | sort -u > links.txt
pi@clu:~/playground $
Reading the content...
pi@clu:~/playground $ cat links.txt
Now, just for fun, a bash 'for loop' will convert those names into IP addresses, with the help of the 'host' utility (shipped with most Linux distributions as part of the dnsutils/bind-utils package).
pi@clu:~/playground $ for url in $(cat links.txt); do host $url | grep "has address" | cut -d " " -f4; done > ip.txt
pi@clu:~/playground $ cat ip.txt
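A slightly more defensive variant (a sketch, not from the original session) silences lookup errors and de-duplicates addresses, since names that fail to resolve would otherwise just add noise to the list:

```shell
# Read links.txt line by line; discard 'host' error output and keep
# only the address field from lines like 'example.com has address 1.2.3.4'
while read -r url; do
    host "$url" 2>/dev/null | awk '/has address/ {print $4}'
done < links.txt | sort -u > ip.txt
```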
Who would have thought that so many corporate server IPs can be found on the main page?
And that is just the beginning of our fun!
Now, after this quick mental exercise, I feel a bit more relaxed. I will be able to face tomorrow, and the tough meeting with the suits that I am NOT looking forward to.