Monitoring CISCO ACLs with ELK stack 5.4 (part1)

Hi again world!
After finishing my exams, I've finally had time these last few days to play with a couple of things I had pending for some time...

On one hand, almost two years have passed since I did my first labs with the ELK stack, later successfully deploying it at the company where I work (it was 1.X back then), so I had to check out what's new in the current 5.X versions.
Although there are several little issues with our current deployment, in general we are very satisfied with the results... and I have learnt how to write filters and build nice Kibana 3 dashboards about almost anything we have in production... ELK is just amazing!

On the other hand, I have often wondered what kind of hostile activity my CISCO lab gear receives, how often it comes, from where, and so on...
Knowing that ELK excels at doing precisely that (we use it to monitor pfSense, Suricata and iptables logs, for example), and how nice it is to have a Kibana dashboard showing all that info in one place, I wondered about getting my CISCO ISR/PIX/ASA ACLs monitored and graphed at home too!!!

So, after getting it up and running, I thought it would be nice to share my notes in the form of a blog post... so here we are!
This time, though, due to the length of the whole thing, I will split it into parts, so it will be easier to follow.

Before starting...

If you have somehow landed here and are still reading, chances are that you're an experienced ELK user... If that's the case, this may be the time to skip ahead to the next article, or even to forget the whole series:
I'm not an ELK expert at all, just an experienced user of some parts of the ELK stack.

I'm using Debian stable on servers... and at the time of this writing, that still means Debian Jessie.
Current ELK is 5.4 and it needs Java 8, which on Jessie means installing Oracle's Java 8 JRE.

So, in this first article I'm going to start by introducing ELK, or to be more precise, by giving my two cents on something as complex as ELK can be.
I'll also cover how to prepare the server for the basic ELK pieces: Oracle Java 8 and the Elasticsearch Debian repository.

First things first... about the ELK stack

The ELK stack is, as you may guess, a stack of software: different programs that together let you build an awesome system, ranging from a single server to a huge cluster, capable of ingesting service logs, transforming them into useful data, storing that data, and providing the tools to query/search it and to generate visualizations, graphs and dashboards.

Unlike real-time monitoring services (which are also necessary!) such as Munin, Nagios, Cacti or the like, ELK takes service logs as its source of information.
So it's useful for analyzing anything that generates a log file or can send logs to a remote syslog server: webservers, FTP, streaming, system logs, routers, firewalls, IDS/IPS, iptables... anything!!!

Visit the official Elastic site for full details and information on the stack:
Elastic official website
Official introduction to ELK stack
Official introductory video

How does it work? What does it do with the logs?

The basic idea, the core, is to 'teach' ELK how your program writes a log line:
Every log line contains a set of information... date, time, source IP, user agent... it has a certain structure, a format.
By tailoring an appropriate filter for a given log line format, ELK is able to split out all the information contained in every log line produced, getting every piece of information perfectly classified.
At this point the party starts: it can add additional information, discard useless bits, and things like that.
It then finally stores all this data in a database.
The process repeats every time a new log line is produced.
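
To make this less abstract, here's a minimal sketch of what such a filter looks like in Logstash (purely illustrative, not something we'll configure in this series): a raw Apache access log line, and the stock COMMONAPACHELOG grok pattern that splits it into named fields:

# An example Apache access log line, as written by the webserver:
#   203.0.113.5 - - [12/Jun/2017:10:01:44 +0200] "GET /index.html HTTP/1.1" 200 2326
# A Logstash filter that splits it into fields (clientip, timestamp,
# verb, request, response, bytes...) using a stock grok pattern:
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}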

The result is that all the info contained in huge, hard-to-read log files gets perfectly stored in a queryable database.
Then you can query the database, asking for certain info, using conditionals, time spans and so on... or configure a plethora of graphs, histograms, tables, bars, maps, pie charts and more! into amazing dashboards that are automatically updated from the database in real time.
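
To give a taste of the query side, straight from the shell (an illustrative sketch: it assumes Elasticsearch listening on localhost:9200 and a typical logstash-* index holding a src_ip field), asking for the last few events coming from a given address looks like this:

curl -s 'http://localhost:9200/logstash-*/_search?q=src_ip:203.0.113.5&size=5&pretty'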

Why does it come as a separate set of programs?

Well, because that way you can truly scale:

  • You can install just the minimum necessary parts on one single server that monitors its own syslog...

  • Or you can have a big cluster of servers, for high capacity and redundancy, running roles (and ELK pieces) separately:
    Servers running just the database/storage software part...
    Servers doing just ingest, filtering and analyzing, running just the 'filtering' parts...
    More servers dedicated just to performing queries...
    Servers that automatically query the database and generate and serve dashboards...
    Servers doing caching (we use Redis, for instance)...
    And finally, on the true business servers (for instance a cluster of hosting servers), just a single, lightweight piece of the ELK software stack that simply watches for changes on log files, reads them, and sends new log lines to the ELK ingest system (see the sketch below).

As you can see, ELK was designed from the very beginning 'thinking big'... but you can run it perfectly well on a single PC!!!!
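
The lightweight shipper mentioned in the last point of the list above is Filebeat, from the Elastic Beats family. As a rough sketch of the idea (the hostname, path and port are just example values), its whole job fits in a tiny YAML file:

cat > /etc/filebeat/filebeat.yml <<'EOF'
# Watch these log files and ship every new line as it appears...
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/syslog
# ...to the Logstash ingest tier (example host/port):
output.logstash:
  hosts: ["logstash.example.lan:5044"]
EOF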

What are the components of the ELK stack?

ELK stands for Elasticsearch, Logstash and Kibana.
There may be other pieces in the mix, it is really flexible, but these 3 are the basic ones:

  • Elasticsearch is the database software. It is cluster oriented, and it also provides the interface for querying the database.
    Since the 5.X versions, filtering abilities have been added as well (the so-called ingest nodes).

  • Logstash is the program capable of reading a log file, receiving logs over TCP/UDP (like a syslog server), or taking other forms of input.
    It then analyzes every log line, applies filters, queries external databases to enrich the info (GeoIP is a typical one, to get geolocation from the IP addresses in log lines), transforms the data based on criteria... really flexible!
    Finally, it is able to 'inject' that information into an Elasticsearch database, put it on a cache server, or send it to other forms of output (see the pipeline sketch after this list).

  • Kibana is, as of version 5.X, a node.js application that includes a standalone webserver (no more need for a standard webserver such as Apache2 or NGINX).
    It has the ability to generate visualizations, graphs, statistics and so on, combining them into dashboards... it is visually incredible!
    It queries Elasticsearch to generate the presented data, allowing us to visually query the database, filter the data, and so on...
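
And to tie the three together, this is roughly what a minimal Logstash pipeline config looks like, end to end (an illustrative sketch only, using typical defaults we haven't configured yet): logs come in, get filtered, and land in Elasticsearch, where Kibana picks them up:

input {
  # Listen for syslog messages on UDP 514 (example port; ports
  # below 1024 need privileges, 5140 is a common alternative)
  udp { port => 514 }
}
filter {
  # Split each line into fields with a grok pattern, as shown earlier
  grok { match => { "message" => "%{SYSLOGLINE}" } }
}
output {
  # Store the structured events in Elasticsearch (one index per day by default)
  elasticsearch { hosts => ["localhost:9200"] }
}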

Getting the Elastic Debian repository

The Elastic repository is served over secure HTTPS transport, and its packages are signed, so it also needs a GPG key...

So, first make sure apt's HTTPS transport is available:

apt-get install apt-transport-https

Then get the GPG key and add it:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

Finally add the repository source list file:

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list

Now perform apt update:

apt-get update

Getting Java 8

If you're running Debian Stretch (or using jessie-backports), there is an OpenJDK JRE for Java 8 already packaged.
ELK works with OpenJDK, although Oracle Java is the one often recommended around, especially for production environments.
I have to say that my whole ELK 1.5 infrastructure has been running on OpenJDK JRE 7 quite well for a long time...
But remember, ELK 5.X requires Java 8.

Getting a headless Java 8 JRE on Debian Stretch is as simple as:

apt-get install openjdk-8-jre-headless

If you're on stable Jessie, you'll have to install Oracle's Java 8 JRE... here's a quick howto:

First, install the 'java-package' package, available in the Debian repository.
It lets us build a .deb package, fully automatically, from the original Oracle Java 8 tar.gz archive using the make-jpkg command.
I found that building the latest Java 8 versions failed due to a missing dependency, libXtst.so.6, which is provided by the libxtst6 package, so install it too:

apt-get install java-package libxtst6

Then visit the Oracle Download center and get the latest Java 8 JRE.
Pay attention when selecting your download... Linux, 64-bit, tar.gz.
The "server" version is suitable for servers that do not run a desktop (no browsers, so no need for the Java plugin stuff).
When I did it last week, upon accepting the license, I downloaded a 53MB file, server-jre-8u131-linux-x64.tar.gz, to give you an idea.
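
If you prefer to download it from the server's shell, the usual trick at the time was to pass Oracle's license-acceptance cookie along with the request (the exact URL changes with every release, so copy it from the download page; the one below is deliberately left as a placeholder):

wget --header "Cookie: oraclelicense=accept-securebackup-cookie" \
  "http://download.oracle.com/otn-pub/java/jdk/.../server-jre-8u131-linux-x64.tar.gz"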

Now we're ready to build Java 8.
This has to be done as a non-root user... so:

su someuser
make-jpkg server-jre-8u131-linux-x64.tar.gz

This builds Java and creates the package file oracle-java8-jre_8u131_amd64.deb
Again as root, install it:

dpkg -i oracle-java8-jre_8u131_amd64.deb

Check it out:

root@server:~# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

We got it! ... at this point we are ready to start installing and configuring the ELK pieces.
In the next article I'll post my notes on how to install and do a basic setup of Elasticsearch.

Bye!!!