Suricata + ELK [Installation]

Technically, this install should be described as: Suricata/Filebeat + Elasticsearch/Kibana, but that makes for a poor headline.

Architecture

In a multi-Suricata server environment, the Elasticsearch server is paired with the Kibana GUI. Individual Suricata installs are set up with Filebeat agents at separate points in the network(s). Filebeat sends each Suricata machine’s log data back to Elasticsearch for processing into dashboards and reports.

This would be a useful setup if each Suricata instance were in a different datacenter:

One Elasticsearch server gathers data from the various Suricata servers. Kibana is set up alongside Elasticsearch to provide a nice GUI frontend, while Filebeat is installed on each Suricata installation, allowing log data to pass from each Suricata server back to the ELK server.
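Roughly, the topology looks like this (the number of Suricata boxes is arbitrary):

[Suricata + Filebeat] ---\
[Suricata + Filebeat] ----->  [Elasticsearch + Kibana]  <--- your browser on port 5601
[Suricata + Filebeat] ---/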

IDS vs IPS

You may already know the difference between an IDS and an IPS. But if, like me, you’re just getting interested in the subject, here’s a quick overview: an IDS (Intrusion Detection System) detects problems, and an IPS (Intrusion Prevention System) prevents them. Software like Snort or Suricata can take on either role depending on the rules it is configured to use.
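As a minimal illustration, the same signature can act as detection-only or as blocking just by changing the action keyword. Both rules below are made-up examples (the sid values are arbitrary), not from any shipped ruleset, and the drop action only takes effect when Suricata is running inline as an IPS:

# IDS behaviour: log an alert when a telnet connection into the home network is seen
alert tcp any any -> $HOME_NET 23 (msg:"EXAMPLE Telnet connection attempt"; sid:1000001; rev:1;)

# IPS behaviour: same match, but the traffic is dropped when Suricata runs inline
drop tcp any any -> $HOME_NET 23 (msg:"EXAMPLE Telnet connection dropped"; sid:1000002; rev:1;)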

Rules

With an IDS or IPS it is the rules that define the course of action. Not only can these rules be updated over time, but we can create our own rules.

Below is an example rule, taken from the tgreen/hunting ruleset.

alert tls any any -> any any (msg:"TGI HUNT BurpSuite string in TLS"; flow:established; content:"|0b|PortSwigger"; distance:1; within:12; reference:url,portswigger.net/>

The rule is fairly readable: it alerts (rather than blocks) on TLS traffic containing a string associated with PortSwigger’s BurpSuite scanner. If network traffic matches the rule, an alert is created.

The tgreen/hunting ruleset is one of several sources that auto-update, and sources are enabled via a simple suricata-update command (shown later in the install steps). Many people also craft their own rules based on attacks, malware, and scans found in captured PCAPs; by reading a PCAP, you can pick out the details needed to build a rule, as sketched below.
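If you want to try a custom rule, a minimal sketch looks like the following. The file path and sid are my own assumptions (sids in the 1000000+ range are conventionally reserved for local rules), and suricata-update’s --local option merges a local rules file in with the downloaded sources:

# /etc/suricata/rules/local.rules -- a trivial local rule that alerts on ICMP echo requests
alert icmp any any -> $HOME_NET any (msg:"LOCAL ICMP echo request seen"; itype:8; sid:1000010; rev:1;)

# merge it into the active ruleset and reload
sudo suricata-update --local /etc/suricata/rules/local.rules
sudo systemctl restart suricata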

ELK

ELK of course stands for Elasticsearch, Logstash, and Kibana. Logstash is the log collection and processing pipeline, used to move log data from server to server.

Elasticsearch is a query-based data analysis engine (like a search engine for your data). Kibana is a front-end GUI for Elasticsearch. With Kibana we can directly input queries, create graphs and reports, and even build dashboards.
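For example, once Filebeat’s Suricata module is shipping data (covered below), a Kibana query along these lines should narrow the view down to alert events; the field names assume the standard Filebeat Suricata module mappings:

event.module: suricata and suricata.eve.event_type: alert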

The difference between Logstash and Filebeat is explained well in articles like https://logz.io/blog/filebeat-vs-logstash/

Installation of an ElasticSearch/Kibana Server

Note: much of the install process is from Digital Ocean.

For reference, here are the packages and versions I required for the ELK server:

  • elasticsearch=7.17.8
  • kibana=7.17.8
# most of this is from https://www.digitalocean.com/community/tutorials/how-to-build-a-siem-with-suricata-and-elastic-stack-on-debian-11

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

sudo apt update

sudo apt install elasticsearch=7.17.8

sudo apt install kibana=7.17.8

sudo nano /etc/elasticsearch/elasticsearch.yml
## Modify the network section to add:
network.bind_host: ["127.0.0.1", "your_private_ip"]

## At end of file append:
discovery.type: single-node
xpack.security.enabled: true

# without the following daemon-reload I couldn't start the elasticsearch service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service

# elastic search needs to be running in order to do the next part:
cd /usr/share/elasticsearch/bin
sudo ./elasticsearch-setup-passwords auto

# Log the output passwords someplace; they will be used later on

cd /usr/share/kibana/bin/
sudo ./kibana-encryption-keys generate -q

# save the xpack output for next step

sudo nano /etc/kibana/kibana.yml

# paste the xpack generated data at the end of this file

# while in kibana.yml, look for the server.host setting and set it to your private IP, e.g. server.host: "your_private_ip"

sudo ./kibana-keystore add elasticsearch.username

# when prompted for the elasticsearch.username value, enter:
> kibana_system

sudo ./kibana-keystore add elasticsearch.password

# when prompted for a password, enter the kibana_system password that was generated earlier on.

sudo systemctl start kibana.service

If the above steps worked out well, you should be able to browse to http://your_server_ip:5601 and get a login screen. Log in with the username elastic and the password generated for that user during the Elasticsearch setup.
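If the login screen doesn’t come up, a quick sanity check is to query Elasticsearch directly from the ELK server, substituting the elastic password generated earlier:

curl -u elastic:your_elastic_password "http://127.0.0.1:9200/_cluster/health?pretty"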

Suricata + Filebeat

While there are several IDS choices to look at, I really liked Suricata’s community and apparent ease of install. I’m not saying it’s better or worse than anything else – just that I found great documentation on setting it up with a GUI (using ELK).

Installation of Suricata and Filebeat

  • suricata=1:6.0.1-3
  • filebeat=7.17.8
sudo apt update

sudo apt-get install suricata=1:6.0.1-3

sudo systemctl enable suricata.service

sudo systemctl stop suricata.service

sudo nano /etc/suricata/suricata.yaml

# in suricata.yaml you need to change the capture interface value. By default it is eth0.
# if your interface is not eth0, update the interface setting (around line 580), replacing eth0 with your interface name

# while editing suricata.yaml, set the default rule location (default-rule-path) to:

/var/lib/suricata/rules

# towards the top of the yaml file, modify HOME_NET to represent your network, such as 192.168.1.0/24
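# e.g. the vars/address-groups section would end up looking something like this (addresses are placeholders):
# vars:
#   address-groups:
#     HOME_NET: "[192.168.1.0/24]"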
# at the end of the file append:
detect-engine:
  - rule-reload: true

# save and exit the yml file

sudo suricata-update update-sources
sudo suricata-update list-sources

# listing sources gives an idea of the various sources you can enable.

sudo suricata-update enable-source et/open
sudo suricata-update enable-source tgreen/hunting
sudo suricata-update
sudo systemctl restart suricata

# while it's restarting, tail the suricata.log:

sudo tail -f /var/log/suricata/suricata.log

# If there are any errors with the rules they will appear in this log output. You can also run Suricata's built-in config test to find rule errors:

sudo suricata -T -c /etc/suricata/suricata.yaml -v

### Filebeat install on Suricata server

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

sudo apt update
sudo apt install filebeat=7.17.8

sudo nano /etc/filebeat/filebeat.yml

# search for the Kibana host section and add:
host: "IP of your ELK server:5601"

# search for the output.elasticsearch section and add a line:

hosts: ["IP of your ELK server:9200"]

# after this add the username for elastic and the password generated during the elastic search install

username: "elastic"
password: "the password generated during the elastic search install"
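# assembled, that part of filebeat.yml ends up looking roughly like this (IP and password are placeholders):
# output.elasticsearch:
#   hosts: ["10.0.0.5:9200"]
#   username: "elastic"
#   password: "password_from_elasticsearch-setup-passwords"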

# save and exit the yml file

sudo filebeat modules enable suricata

sudo filebeat setup

sudo systemctl start filebeat.service

At this point the installs should be complete. We have one ELK server and one Suricata server. You can add more Suricata servers, as described in the Architecture section above – making sure each Filebeat yml points to the single ELK server.

Dashboard

Log in to your ELK server (ELK IP:5601) using the elastic username/password you set up.

After logging in, click in the search bar at the top and enter:

type: filebeat suricata

That should filter down to a few results. One of those results will be:

[Filebeat Suricata] Alert Overview

and another will be [Filebeat Suricata] Events Overview

Click to load those up. It may take a bit of time to load, but as long as data is passing through the Suricata logs (either /var/log/suricata/eve.json or /var/log/suricata/fast.log), these dashboards should populate.
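If the dashboards stay empty, one common way to generate a harmless test alert from the Suricata machine is to request the NIDS test page and watch fast.log. Exactly which signature fires depends on the rulesets you enabled (testmynids.org is a third-party test service, not part of this install):

curl -s http://testmynids.org/uid/index.html

sudo tail -f /var/log/suricata/fast.log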