


Security Setup at Work, at Home, or to Learn

There are a lot of tools for security. The tools that get the most attention are perhaps the offensive ones, but defensive tools and skillsets are very much needed. In this article, I cover the security measures I would put in place if I jumped into a team that had little to no security measures, and which of these I would install locally if I were learning cybersecurity, so I could grow my understanding of similar products.

Website Protection (DDoS)

For a website, I’d set up an account with Cloudflare and enable its DDoS mitigation measures. You’ll need contact and escalation paths recorded in a playbook (more on playbooks further down). This is a must, because you don’t want to be scrambling for them during or after a DDoS. A paid subscription goes a long way, as Cloudflare’s support teams can assist with the heavy lifting during an attack.

As a Student…

While Cloudflare caters to enterprise-level customers, it also offers a free tier. I’ve used the free tier to expose internal web applications in my home lab to the internet via Cloudflare. It’s a great way to get to know the Cloudflare environment and its possibilities.

As an Employee…

Understanding services like Cloudflare can go a long way toward protecting web applications. The greatest learning catalyst is, of course, need. But it’s great to have a head start on how this works before the need occurs.

Log and Traffic Analysis

Having a stack in place that monitors traffic is paramount. Alerting on traffic origin, quality (abusive sources), endpoint access, and error levels (400s, 500s), and correlating all of this data back to IP addresses, is vital to understanding the security state at any moment. What makes this really work are the dashboards that one can create. Dashboards reveal anomalies instantly. On top of dashboards, we can create notifications that trigger at various thresholds (such as more than 100 HTTP 500 errors in the span of an hour).
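Before wiring a threshold like that into a dashboard, it helps to see what the raw signal looks like. A minimal sketch, counting 5xx responses in a sample NGINX-style access log (the log lines and path are made up for illustration):

```shell
# Create a tiny sample access log (combined format) for illustration.
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [10/Oct/2024:13:55:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.0"
5.6.7.8 - - [10/Oct/2024:13:55:37 +0000] "GET /api HTTP/1.1" 500 198 "-" "curl/8.0"
5.6.7.8 - - [10/Oct/2024:13:55:38 +0000] "GET /api HTTP/1.1" 502 198 "-" "curl/8.0"
EOF

# Field 9 of the combined log format is the HTTP status code;
# count every 5xx response.
awk '$9 ~ /^5[0-9][0-9]$/ { c++ } END { print c+0 }' /tmp/access.log   # prints 2
```

A dashboard threshold alert is essentially this same count, computed continuously over a rolling time window.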

Elastic (ELK) Stack

I use ELK. ELK has various tiers, from free to enterprise. The paid versions offer more AI/machine-learning capabilities, as well as Fleet and cloud options. As of version 8+, more of ELK’s security modules seem to fall behind a paywall as well. However, if you version-lock to 7.x, ELK is more accessible.

ELK is the foundational technology used by Security Onion, Wazuh, and other software packages. What ELK excels at is rapid response to queries over large data sets. ELK also makes it easy (drag and drop) to visualize various data fields and build working dashboards.

While ELK’s notification system (email alerts) is part of a paid pricing model, there are third-party alternatives for setting up email alerts (such as Yelp’s ElastAlert, freely available on GitHub).
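As a sketch of what that looks like, an ElastAlert rule file implementing the 500-errors-per-hour threshold mentioned earlier might resemble the following (the rule name, index pattern, field name, and email address are placeholders; check ElastAlert’s docs for your field mappings):

```yaml
# Hypothetical ElastAlert rule: email when 5xx responses exceed 100 in one hour.
name: http-5xx-spike
type: frequency
index: filebeat-*
num_events: 100
timeframe:
  hours: 1
filter:
- range:
    http.response.status_code:
      gte: 500
alert:
- email
email:
- "ops@example.com"
```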

Dashboards

For ELK dashboards, I like to create several modules that display relevant information for a concept. A HAProxy dashboard makes great use of URLs hit with aggregated counts, and columns showing how the traffic breaks down into 200-, 300-, 400-, and 500-level responses. I also like to see graphed data on sources and destinations, as well as the different backend servers that HAProxy manages.

For a web server specifically, I like to make use of the NGINX default dashboard, or modify it to show more detail, such as a browser type/version breakdown.

For Suricata, I tend to use the default dashboard supplied by Suricata, with tabs for events and alerts, each containing tables and visualizations of threat activity.

Setup Highlights

ELK is set up as a server, in either a single-node or multi-node configuration.

Public endpoints, like load balancers and web servers, should have Filebeat installed, which works in tandem with ELK’s Logstash. Together they turn logs into data elements, parsed and shipped to the ELK server for review. Filebeat comes with modules that cater to the biggest names in technology (Apache, HAProxy, NGINX, etc.). Installing a module configures how Filebeat parses the logs into data elements, which are sent to ELK. Details on configuring this are found on this blog.
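As a rough sketch of that flow (assuming a standard package install of Filebeat on the web server), enabling a module looks like this:

```shell
# Enable the NGINX module so Filebeat knows how to parse its log format.
sudo filebeat modules enable nginx

# Load index templates, ingest pipelines, and the stock Kibana dashboards.
sudo filebeat setup

# Start shipping logs on boot and immediately.
sudo systemctl enable --now filebeat
```

The Elasticsearch and Kibana hosts that `filebeat setup` talks to are configured in `filebeat.yml` beforehand.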

ELK will require Index Lifecycle Management (ILM) to be set up; otherwise you’ll run out of disk space. You want to cap how large your indexes can grow and decide when to rotate them out (delete/prune).
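A minimal sketch of such a policy, in Kibana Dev Tools syntax (the policy name and the size/age thresholds are arbitrary examples, not recommendations):

```
PUT _ilm/policy/filebeat-cap
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "25gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

This rolls an index over once it hits 25 GB or seven days of age, and deletes rolled-over indexes after thirty days.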

As a Student…

In short, while some ELK features fall behind a paywall, if you host on-site, version-lock, and make use of third-party solutions, it can be installed on a home network. Setting aside an old machine, installing Debian/Ubuntu on it, and repurposing it for this work is easy enough to do. I have installation walkthroughs on this site covering the process.

As a student, you could set up an ELK server on a physical machine or a virtual machine. Other machines, physical or virtual, can run a web server with Filebeat installed to consume the logs and ship log data back to the ELK server. From there, you can create dashboards to generate near real-time visualizations.

Stepping through this process, conquering hurdles along the way, will go a long way in establishing yourself in IT/DevOps/Cybersecurity.

As an Employee…

If your company doesn’t have someone analyzing traffic, or a traffic monitor (like ELK), you can add this role to your existing duties. If you’re passionate about security, this is probably something you’d prefer doing anyway. Create proofs of concept in small virtual environments, then pitch them to the stakeholders to get sign-off on moving the concept into a production monitoring configuration.

If the company is wary of using it in production, you can start by monitoring pre-live environments (Dev, QA, Stage). I have an article on using ELK to monitor QA labs.

SIEM

The ELK stack can be used to power a SIEM. Typically an IDS or IPS is set up to perform packet analysis and report back to the ELK server which events might be threats worth alerting on.

A company shouldn’t neglect IDS coverage. While it may be tempting to put more effort into expanding infrastructure, without an IDS a company is simply increasing its attack surface. A datacenter is an ideal place to drop an IDS (such as Suricata) to monitor for threat activity.

While an IDS ships with rulesets, users can also craft rules specific to events witnessed on their own network.
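For example, a minimal custom Suricata rule (hypothetical; the port and sid are arbitrary choices) that alerts on outbound connections to a port commonly used by reverse-shell payloads might look like:

```
# local.rules: alert on outbound TCP to port 4444 from inside the network.
alert tcp $HOME_NET any -> $EXTERNAL_NET 4444 (msg:"LOCAL possible reverse shell callback on 4444"; flow:to_server; sid:1000001; rev:1;)
```

Custom rules like this live alongside the vendor rulesets and show up in the same alert pipeline and dashboards.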

As a Student…

Open-source IDSes like Suricata can be easily installed on a Linux box. I have articles about installing Suricata on this website. Try installing an IDS like Suricata or Snort and get accustomed to the process and flow. Take your own notes, and comment on any problems you had to overcome.

Suricata doesn’t have to ship logs to ELK, but doing so makes the process very smooth, and the visualizations are well worth it.

Getting local traffic routed to your Suricata box may be challenging, but there are several ways to accomplish this:

  1. Buy a network tap, place it between the router and modem, and run a cable from the tap to your Suricata machine.
  2. Invest in a managed switch that allows port mirroring. You plug the router into a port on the switch (say port 1) and mirror that port’s traffic to another port (say port 5). Then you run a cable from port 5 on the switch to your Suricata machine.

In either case above, you should see traffic. One other (more advanced) option is to stand up a web application and server (e.g., Apache or NGINX) on the Suricata machine. Then, using Cloudflare, map the local application to an internet domain you own. Details can be found in Cloudflare’s Zero Trust documentation, under “Create your first tunnel.”
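A rough sketch of that tunnel flow, assuming `cloudflared` is installed (the tunnel name and hostname are placeholders; follow Cloudflare’s current docs for exact steps):

```shell
# Authenticate the daemon against your Cloudflare account (opens a browser).
cloudflared tunnel login

# Create a named tunnel; this writes a credentials JSON file locally.
cloudflared tunnel create lab-tunnel

# Map a DNS hostname you own to the tunnel.
cloudflared tunnel route dns lab-tunnel app.example.com

# Run the tunnel, proxying requests to the local web server.
cloudflared tunnel run --url http://localhost:80 lab-tunnel
```

Traffic to the public hostname then flows through Cloudflare, into the tunnel, and past Suricata on its way to the local web server.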

Once traffic is coming into the server, you’ll get a reasonable idea of how an IDS works. You can add extra rules from Suricata or write your own. You can even ask ChatGPT questions like “create a Suricata rule that detects reverse shells” and see what you get.

As an Employee…

If you have permission to use an internal network, you can try out Suricata as a proof of concept in a Dev, QA, or Stage environment. In time, if the powers that be recognize the value of the IDS, you can work on getting it rolled out to a production environment.

Memory / CPU / Disk Monitoring

ELK has several “Beats.” A Beat is a type of data shipper. Filebeat ships data from logs or files; Metricbeat ships metric data, like what you’d see in Activity Monitor or “top”. What’s great about sending metrics to an ELK server is that the data is historic: if you notice a machine is down, you can scroll back through the CPU and memory history to see when it became unstable and which process might have caused it.

As a Student…

Installing Metricbeat is free and easy. Once you have ELK installed, you simply configure Metricbeat to talk to the ELK server’s IP and start it up. You can create dashboards showcasing memory, CPU, and hard disk utilization.
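A minimal `metricbeat.yml` sketch for that setup (the server addresses are placeholders for your ELK host; the system module’s metricsets shown here are a common subset):

```yaml
# Ship basic host metrics to a local ELK server every 10 seconds.
metricbeat.modules:
- module: system
  metricsets: ["cpu", "memory", "filesystem", "process"]
  period: 10s

output.elasticsearch:
  hosts: ["http://192.168.1.50:9200"]   # hypothetical ELK server address

setup.kibana:
  host: "http://192.168.1.50:5601"      # lets `metricbeat setup` load dashboards
```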

This can be useful for monitoring internet-facing or public endpoints. I’ve seen utilities on DevOps teams that show real-time data, but real-time views sometimes aren’t useful for spotting the process that caused a performance issue or outage after the fact.

As an Employee…

If you get permission to run Metricbeat on a proof-of-concept machine, it can go a long way toward showing how you can help Quality Assurance teams discover break points.

Playbooks

Playbooks are essential. These are documents detailing step-by-step recovery from a problem state. Playbooks can also cover policy violations. If a scan returns a finding against a HIPAA compliance rule, a threat actor is discovered on a machine, or a computer is infected with a virus, the worst thing to do is figure out what’s next in the moment. Having a step-by-step set of instructions for what to do next is imperative.

Situations like DDoS attacks, or attacks that require third-party help, will require the names and phone numbers of points of contact at those vendors. Otherwise you’ll experience more downtime as you browse websites for that elusive direct number to top-tier support.

EDR (Endpoint Detection and Response)

A company needs EDR. That became clear to me after going through Offensive Security’s OSCP. Imagine this scenario, and ask yourself how you would detect it.

Imagine an attacker is attracted to your company. They use OSINT (open-source intelligence) on your staff and discover the names of several lead developers. Digging further, the attacker learns that one lead developer has a hobby of comic collecting. The attacker spearphishes them with a PDF referencing a sale of rare comics at amazing prices. The catch is that the PDF has been crafted with a reverse shell. Once opened, it calls back to the attacker, passing them a shell. Now the attacker has the same access level on that machine as the lead dev, who likely opened the email on their work laptop. Once on the device, the attacker begins zipping and exfiltrating local private GitHub code. Since the dev connects to the VPN during work hours, the attacker gains further access into the company, with the lead dev’s access levels.

An EDR has a server that monitors agents installed on every company-owned device (endpoint). These agents alert on application crashes, policy failures (HIPAA, GDPR, PCI DSS), failed login attempts, and attack attempts. Unique rules can be crafted to check for things like reverse shells, which would alert on the scenario described above:

    <command>ss -nputw | egrep '"sh"|"bash"|"csh"|"ksh"|"zsh"' | awk '{ print $5 "|" $6 }'</command>

The above example is from Wazuh’s reverse shell detection documentation.

There are several open-source EDRs and a lot of paid services. Of the open-source variety, I’ve personally used Wazuh as well as OpenEDR (Xcitium). Between the two, I prefer Wazuh. Wazuh can run on a local server or in the cloud; OpenEDR only works in the cloud. OpenEDR also has a confusing pricing scheme: although I was on a free tier, I would get invoices each month! I also found OpenEDR’s agents difficult to uninstall.

Installing Wazuh

There are several ways to install Wazuh:

  • No install: use SecurityOnion which has Wazuh pre-installed
  • Ubuntu Install
  • Non-Ubuntu, Debian flavor (Kali, etc.) install
  • Docker Install

While you can get a taste of Wazuh through the Security Onion OS/VM, I wouldn’t recommend it. Installing Security Onion is itself a challenging effort and requires multiple NICs, or virtual NICs.

The Ubuntu install of Wazuh is easy and straightforward. Everything just works. To see the documentation on this flow, head over to Wazuh’s official documentation. Three components are installed: the Indexer, the Server, and the Dashboard. These can be installed as a single-node setup (all three on one VM or machine) or each on a separate machine (multi-node). I opted for single-node. You can use the provided install scripts, but I followed the step-by-step directions, starting with the Indexer, then the Server, and finally the Dashboard. No issues.
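For reference, Wazuh’s quickstart boils the single-node flow down to one script (the version number in the URL changes between releases, so copy the current one from their docs):

```shell
# Download Wazuh's all-in-one install script and run it with the
# "-a" (all components) flag: Indexer, Server, and Dashboard on one host.
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
sudo bash ./wazuh-install.sh -a
```

The script prints the generated admin password for the Dashboard at the end, so keep that output.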

I also installed this on a Kali Purple machine. That was hairier. For some reason the install scripts work amazingly well on Ubuntu and not so well on other Linux flavors. On Kali I had to do a lot of workarounds (such as making sure Filebeat installed at version 7.10.2 and not the latest 7.17.x or 8.x).

Docker is a quick way to get set up, but the Docker container does not come with the necessary elements for email notifications. To get those working, you either need to install an SMTP server in the container or expose your host OS’s SMTP server to the container, which (imo) is complex.

GVM (Scanner)

Greenbone Vulnerability Management (GVM) is a great utility. It’s a challenge to get running outside of Docker, so I’d suggest dedicating a VM (with lots of disk and memory) to host it and run it as a Docker container.
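The Docker route, as a sketch (Greenbone publishes an official Docker Compose file for its Community Edition; the file’s URL and the project name are version-specific, so copy them from their current Community Containers guide):

```shell
# Working directory for the stack.
mkdir -p ~/greenbone && cd ~/greenbone

# Place Greenbone's published docker-compose.yml here (download it from
# their Community Containers documentation), then bring the stack up:
docker compose -f docker-compose.yml -p greenbone-community-edition up -d
```

The first start takes a while, as the containers download the vulnerability feeds before scans are possible.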

Most people talk about GVM (OpenVAS) as a useful external scanner. However, the real power lies in the authenticated scan: logging in as a user with admin/root privileges allows the scanner to inspect a machine’s installed libraries and packages.

I have several articles on installing and setting up GVM on this website.

Application Security / Pentesting

Offensive testing (pentesting) is manual effort from the Red Team. Using tools like Metasploit, Nmap, Dirb, Burp Suite, Maltego (and many more), security testers attempt to find holes and issues in software.

There are so many areas to this topic that it’s beyond the scope of this article, but this site has many articles pertaining to penetration testing.

AntiVirus

I saved antivirus for last because it’s simply the lowest-hanging fruit. Yes, it finds known threats, but it lacks the scope to discover unknown ones. This was made apparent to me while taking a cybersecurity course about viruses. In the course we crafted our own trojans, which acted as a client/server model: OS commands were sent over HTTP traffic to the infected machine, and the software turned strings like “ls” or “rm -r” into OS commands, executed them, and reported back the response. When the trojans were sent to VirusTotal, none of the 60 antivirus engines flagged the files as viruses. The software’s bytes were unknown to the antivirus databases, and the heuristic scanners weren’t sure whether this was a simple web server or something malicious.

Yes, antivirus is necessary, but it misses a lot of important areas for protection.

Conclusion

This is not a complete list of tools or ideas for hands-on learning, but it’s a start. Learning how to install ELK, set up Suricata, or stand up an EDR on your home network will go far. Most tool types have open-source versions that we can install in our own personal labs, and such work goes a long way toward practical skillset growth.