I use pfSense as my home router and firewall with the pfBlockerNG package to eliminate ads and trackers online. I love everything about it except the reporting interface. It’s slow and clunky. I wanted to get the data into ClickHouse so I could build dashboards with Grafana. Unfortunately, pfBlockerNG only logs data to the local filesystem.
This post is for folks who want to export log file data from pfSense to a central log server. We’ll cover how to do this for the pfBlockerNG DNSBL log, but the approach works for any other service that doesn’t log via syslog, such as Zeek on pfSense.
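Just to sketch the general shape of the problem before we dig in (this is not necessarily the approach the post takes): a log that never touches syslog has to be tailed and re-emitted to the remote collector somehow. A rough Perl illustration, where the log path, facility, and collector address are all assumptions, might look like this:

```perl
#!/usr/bin/perl
# Rough sketch: follow a log file that never touches syslog and re-emit
# each new line as a UDP syslog-style message to a central collector.
# The path, collector address, and facility below are examples only.
use strict;
use warnings;
use File::Tail;            # CPAN module providing "tail -f" style reads
use IO::Socket::INET;

my $logfile   = '/var/log/pfblockerng/dnsbl.log';   # assumed DNSBL log path
my $collector = '192.0.2.10';                       # your central log server
my $port      = 514;

my $sock = IO::Socket::INET->new(
    PeerAddr => $collector,
    PeerPort => $port,
    Proto    => 'udp',
) or die "Cannot create UDP socket: $!\n";

my $tail = File::Tail->new( name => $logfile, interval => 1, maxinterval => 5 );

while ( defined( my $line = $tail->read ) ) {
    chomp $line;
    # <134> = facility local0 (16 * 8) + severity info (6), RFC 3164 style
    $sock->send("<134>pfblockerng-dnsbl: $line");
}
```

In practice you’d more likely lean on a dedicated log shipper, but the shape is the same: follow the file, wrap each line in a syslog-style message, and send it to the collector.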
I published two articles critical of Ansible dependencies and handlers. If you read those articles, you might be surprised to hear that I really like Ansible. I spent 10 years bumping into all the sharp corners. In that time, I managed to create one of the most successful projects of my career: full lifecycle management of on-prem hardware with Ansible. It started as a playbook of reusable tasks to perform firmware, kernel, and OS upgrades on hosts in our infrastructure. With help from my colleagues, it soon grew to provision, audit, decommission, and manage servers, switches, and firewalls.
Ansible made it possible for a small team of 8 to manage over 1,500 devices with enough capacity left over to support development efforts and innovate on our own services and tooling. The support for remote management of blackbox devices like firewalls, load balancers, routers, and switches made it possible to synchronize server changes with network and routing devices. The serial execution and error sensitivity make it safe to point Ansible at a big batch of hosts, say “take these potentially destructive actions,” and have it bail at the first sign of a problem.
Ansible is an amazing orchestration framework, but it is a poor choice for traditional configuration management, where you want continual evaluation and correction toward a defined baseline. In this article, we’ll explore both the strengths and the weaknesses.
In our last installment, we talked about the problem with Ansible dependency tracking. While annoying, its only side effect is longer run times. Ansible’s handlers are far more dangerous and problematic. I learned Ansible after spending 10 years working with Puppet, and Ansible’s handlers seemed like a great way to emulate Puppet’s notify API. Unfortunately, handlers are not reliable, and their scoping means they may not cut down on repetitive work the way you expect.
Join me for a walk into madness as we collectively learn why you should avoid handlers and what you might try instead.
I spend a great deal of time using Ansible for both orchestration and configuration management. Its just-in-time template evaluation unlocks elegant and efficient workflows. I automated the full lifecycle of hardware in our datacenters with Ansible, including provisioning, firmware upgrades, and safely deleting and deprovisioning devices. Due to the weight I ask Ansible to bear, I routinely uncover unexpected behaviors.
One of those quirks caused slow playbook run times. When using roles with dependencies, some parent roles execute multiple times per run, which unnecessarily increases run times due to Ansible’s linear execution. I developed a workaround to address it and thought you might enjoy it!
ClickHouse is an efficient and highly performant columnar database with a lot of impressive features. Its MATERIALIZED VIEWs, which traditional RDBMS folks would call INSERT TRIGGERs, allow you to chain inserts and aggregate data into other tables. Using this workflow, you can build entire data processing pipelines inside of ClickHouse. The native AggregatingMergeTree table engine is often used to aggregate data, but it’s not always the best solution.
Using a materialized view with an AggregatingMergeTree destination table allows you to transform an event stream into a pre-rendered timeseries table. AggregatingMergeTree tables use special aggregating columns to store aggregation states. These aggregation states are not finalized until they are merged by an aggregate merge function. Aggregation states can be simple, like min() or max(), or complex, like uniq(). Simple states only need to store one or two values: the min()/max() states need only the current value, and avg() needs only a running sum and a count of data points, so each new insert updates the sum and increments the count.
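To make the “simple” case concrete (this is my own illustration, not ClickHouse’s internal storage format): you can think of an avg() state as a (sum, count) pair, and merging two states is just element-wise addition. For example, merging (sum=12, count=3) with (sum=8, count=2) gives (sum=20, count=5), and finalizing it yields avg = 20 / 5 = 4. A uniq() state, by contrast, has to track information about the distinct values it has seen, which is why its size depends on cardinality.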
Complex aggregation states require more space to maintain. Depending on cardinality, uniqState() columns can be large; in one instance, the uniqState() column required almost 50% of the storage used by the full-fidelity data in the source table. Performance using uniqMerge() against the AggregatingMergeTree table proved 3 to 5 times SLOWER than running the uniq() query directly against the raw data.
In this article, we’ll explore a few different ways to work around this issue and do something cool!
Monitorama is my favorite conference. Jason delivers an event that creates community and belonging, with speakers who educate the audience on a wide array of topics. Every year, I hear attendees praising the event, the content, the venue, the location, and the sense of community. I think what most people don’t realize is that this is all intentional. Jason has gone to great lengths to create an event that ticks all of these boxes. I’ve had the privilege of helping him out with the Portland event since 2016, as well as the Berlin, Amsterdam, and Baltimore events. He’s shared a lot of his secrets over the years, and I think it’s worth passing along some of that wisdom he’s given me and the other event staff.
I joined Twitter in 2008. It allowed me to connect to the InfoSec community in a way I couldn’t in person at the time. I had a lot of positive experiences, and it opened a few doors for me professionally. Today, after reading about more senior folks resigning and rumors that Musk is searching for ways to monetize user data in unethical ways, it’s time to say good-bye.
I am now happily reliving the best experiences of early Twitter on the hachyderm.io Mastodon instance.
If you’re considering leaving Twitter, there are a few things you might want to do to ensure your data isn’t used in whatever the off-the-rails cry-baby billionaire dreams up next.
For nearly 4 years, I dealt with high levels of stress in my life without seeking help. As a consequence, my stress response got stuck “on”. While I removed myself from the primary stressor, I took on new stress with an international move, a new job, a new house, and reverse culture shock coming back to the USA. Even though these were mostly positive changes, my body kept the stress response active. I knew something was wrong, but I told myself I could manage it. I thrived in stressful situations. I knew my limits.
I was catastrophically wrong. My inability to recognize the severity of my situation led to three devastating physical health issues I am still actively managing every day. I wish I had reached out for help sooner.
These are the steps I am taking to manage my mental, emotional, and physical health:
I started working with a mental health professional
I removed myself from stressful situations
I exercise regularly
I value my attention
I’d like to share my story of how the stress I experienced manifested physically, if for no other reason than to serve as a warning to folks currently dealing with anxiety and stress. I wish someone would’ve told me, “You don’t have to do this alone. It’s OK to ask for help, even if you feel like others are in a worse place.”
While working at Booking.com, I was looking for a logging solution that matched the ease of use and power that Graphite gave us for metrics. Reluctant to bring a new technology into production, I talked to co-workers, and one mentioned that they were using ElasticSearch in some front-end systems for search and disambiguation. He also mentioned hearing about a few projects using ElasticSearch for storing log data.
This began my love-hate-love relationship with ElasticSearch. I’ve spent the past 8 years working with ElasticSearch professionally and in my spare time. Graphite and ElasticSearch are two projects that changed the game in terms of exploring your data. The countless insights I’ve gained into system performance, application performance, and system and network security with these tools are unparalleled. Tools like Grafana and Kibana allow you to visualize your data quickly and beautifully. As a system and security engineer, sometimes that isn’t enough. I spend most of my day in a terminal and needed something to explore and pivot through the data there.
This is the first part in a multi-part series about a tool I created to make ElasticSearch’s powerful search interface more accessible from the terminal. This tool has been essential to nearly every incident I’ve investigated. It was developed with the help, patience, and amazing ideas of co-workers both at Booking.com and now at Craigslist.
Full disclosure: I’m not a fan of systemd. I started working with Linux in the late ’90s and watched it grow from a marginalized operating system to the most dominant operating system in the datacenter. I’ve lived through so many “year of the Linux desktop” years that I remember when it wasn’t a joke. From my vantage point, administering Linux servers professionally for nearly 20 years, systemd is Linux on the desktop at the cost of Linux in the datacenter.
Why do I feel this way? It’s mostly the reinvention and incorrect implementation of core UNIX tools and modalities. There’s a lot of information on systemd out there, and a lot of bias involved. So today, I’m not going to talk about that. I’m going to address a critical mistake in systemd-resolved, the daemon that implements DNS lookups for systems running systemd.
I’ll jump right to the workaround. If you’re running a system that uses systemd, you should probably be running systemd-resolved configured to use a single DNS resolver, 127.0.0.1, with Unbound running locally. There are plenty of resources on how to configure and run Unbound, but the best is Calomel’s Unbound Tutorial. If you need to maintain consistent, reliable DNS resolution that’s compatible with previous versions of Linux, the only way to do that is to have a single DNS server in /etc/resolv.conf.
After getting a few questions from concerned folks about VPN services, I realized this might be better served as an article. That way, anyone who is curious about how to better protect themselves online can reference it.
The Bad News
Well, there’s really no easy way to say this: there is very little, if any, privacy on the Internet. Even if you follow all of the advice I’m about to give, all sorts of clever folks in the Valley and beyond will keep envisioning new ways to improve the “User Experience” (UX) and, in the process, accidentally create newer, cleverer means to circumvent any and all privacy controls you might deploy.
In 2004, when I was starting a new job at the National Institute on Aging’s Intramural Research Program, I began evaluating products to meet FISMA requirements for file integrity monitoring. We had already purchased a copy of Tripwire, but I was being driven mad by the volume of alerts from the system. I wanted something open source. I wanted something that would save me time, rather than waste 2 hours a day clicking through a GUI confirming file changes caused by system updates and daily operations.
At the time, I found two projects: Samhain and OSSEC-HIDS. Samhain is a great project that does one thing and does that one thing very well. However, I was buried in a mountain of FISMA compliance requirements and OSSEC offered more than file integrity monitoring; OSSEC offered a framework for distributed analysis of logs, file changes, and other anomalous events in the same open source project.
I now work at Booking.com and manage one of the world’s largest OSSEC-HIDS deployments. My team and I are active contributors to the OSSEC community. After nearly a decade of experience deploying, managing, and extracting value from OSSEC, I was approached to write a book introducing new users to OSSEC. After 6 months of work, the book has been published!
We use ElasticSearch at my job for web front-end searches. Performance is critical, and for our purposes, the data is mostly static. We update the search indexes daily but have no problem running on old indexes for weeks. The majority of the traffic to this cluster is search; it is a “read heavy” cluster. We had some performance hiccups at the beginning, but we worked closely with Shay Banon of ElasticSearch to eliminate those problems. Now our front-end clusters are very reliable, resilient, and fast.
I am now working to implement a centralized logging infrastructure that meets compliance requirements, but is also useful. The goal of the logging infrastructure is to emulate as much of the Splunk functionality as possible. My previous write-up on logging explains why we decided against Splunk.
After evaluating a number of options, I’ve decided to utilize ElasticSearch as the storage back-end for that system. This type of cluster is very different from the cluster we’ve implemented for heavy search loads.
If you haven’t looked at OSSEC HIDS, here’s the overview:
OSSEC is a scalable, multi-platform, open source Host-based Intrusion Detection System (HIDS). It has a powerful correlation and analysis engine, integrating log analysis, file integrity checking, Windows registry monitoring, centralized policy enforcement, rootkit detection, real-time alerting and active response.
It runs on most operating systems, including Linux, OpenBSD, FreeBSD, MacOS, Solaris and Windows.
OSSEC is a great product, but I ran into an issue when attempting to fulfill a PCI-DSS requirement that involved reviewing our LDAP logs. I knew OSSEC would make this simple. Then I started writing a rule and realized I had hit a significant roadblock. OpenLDAP logs events as they happen and only logs the data relevant to that particular event. The connect event has the ports and IPs, and the bind event contains the username, but only the connection id is shared between the two events.
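To make the roadblock concrete, here’s a small standalone Perl sketch (not an OSSEC rule, and not the solution the post arrives at) that stitches the two slapd events back together by connection id. The sample log lines in the comments are paraphrased, and the exact fields depend on your slapd loglevel.

```perl
#!/usr/bin/perl
# Hypothetical pre-processor illustrating the correlation problem:
# slapd logs the client IP on the ACCEPT line and the bind DN on the
# BIND line, so we stitch them together by connection id.
use strict;
use warnings;

my %conn;   # connection id => client IP seen on the ACCEPT line

while ( my $line = <> ) {
    # e.g. "conn=1001 fd=14 ACCEPT from IP=10.0.0.5:50514 (IP=0.0.0.0:389)"
    if ( $line =~ /conn=(\d+) fd=\d+ ACCEPT from IP=([\d.]+):\d+/ ) {
        $conn{$1} = $2;
        next;
    }
    # e.g. "conn=1001 op=0 BIND dn="uid=jdoe,ou=people,dc=example,dc=com" method=128"
    if ( $line =~ /conn=(\d+) op=\d+ BIND dn="([^"]+)"/ ) {
        my ( $id, $dn ) = ( $1, $2 );
        my $ip = exists $conn{$id} ? $conn{$id} : 'unknown';
        print "BIND by $dn from $ip (conn=$id)\n";
    }
}
```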
I do most of my work over SSH. Even when I’m working in my browser or pgAdminIII, I’m usually doing that over SSH tunnels. VPN software has been around for quite some time, and it’s still mostly disappointing and usually run by the least competent group in any IT department. I developed a workflow using SSH so that from my laptop, whether on the corporate network or at home, I can ssh /directly/ to the server I’m interested in working on.
In order to accomplish this, I have made some compromises. First off, when I’m SSH-ing from home, I am /required/ to type the fully qualified domain name (FQDN); I use the presence of the domain name to activate the proper leap-frogging. I also decided to use SSH’s ControlMaster feature, which can leave me staring at a terminal without a prompt when I forget which shell is the master. Overall, the pros outweigh the cons, and I’m more productive because of it.
First things first. I’ve stated that you should drop everything and install Graphite. If you didn’t already, please do that now. Go ahead, I’ll wait.
Good? Good. I don’t frequently insist on anything the way I do with Graphite. There are a lot of reasons for that. If you don’t believe me, please see @obfuscurity’s awesome Graphite series on his blog.
When you get back we’ll talk about how to monitor ElasticSearch with Graphite for fun and profit!
The reaction to my Central Logging post has been significantly greater and more positive than I could’ve expected, so I wanted to recap some of the conversation that came out of it. I was pleasantly surprised by most of the comments on the Hacker News thread. So, here’s a real quick rundown of the responses I’ve received. I will continue this series this weekend with more technical details.
I have worn many hats over the past few years: System Administrator, PostgreSQL and MySQL DBA, Perl Programmer, PHP Programmer, Network Administrator, and Security Engineer/Officer. The common thread is having the data I need available, searchable, and visible.
So what data am I talking about? Honestly, everything. System logs, application logs, events, system performance data, and network traffic data are key to making any tough infrastructure decision, if not to the trivial infrastructure and implementation decisions we have to make every day.
I’m in the midst of implementing a comprehensive solution, and this post is a brain dump and road map for how I went about it, and why.
I married a Statistician, so this article sums up the lectures I receive on a daily basis. Risk Management is statistical analysis, and I’m not sure how many folks in IT Security have graduate-level statistics exposure. So understanding our statistical shortcomings is key. You need to read that entire article, twice.
As a programmer, I’ve had the concept of “don’t ever trust your users” beaten into my head. For programmers, this concept is incredibly important. Users almost always exceed your expectations for creativity with your new application. By planning for unexpected input and properly sanitizing all variables, you can theoretically account for abuse of your system by malicious users and fail gracefully when someone tries to enter bogus data.
This concept is key to programming. What I find astounding is that a large majority of corporations are adopting this practice for all IT-related issues, and it’s even seeping into HR and other areas of employment. Working as a Security Administrator, I’m surprised that most employers have decided not to trust their employees. If you can’t trust them, then why would you hire them?
We’ve all found useful information on the web. Occasionally, it’s even necessary to retrieve that information in an automated fashion. It could be just for your own amusement, a new web service that hasn’t yet published an API, or even a critical business partner who only exposes a web-based interface to you.
Of course, screen scraping web pages is not the optimal solution to any problem, and I highly advise you to look into APIs or formal web services that will provide a more consistent and intentional programming interface. Potential problems could arise for a number of reasons.
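To give you a sense of the quick-and-dirty version (a minimal sketch with a placeholder URL, not a robust scraper), Perl’s LWP::UserAgent plus a regular expression gets you surprisingly far:

```perl
#!/usr/bin/perl
# Minimal scraping sketch: fetch a page and pull out its <title>.
# The URL and pattern are placeholders, not a real integration.
use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new( timeout => 10, agent => 'my-scraper/0.1' );
my $res = $ua->get('http://example.com/');

die 'Fetch failed: ' . $res->status_line . "\n" unless $res->is_success;

my $html = $res->decoded_content;
if ( $html =~ m{<title>([^<]+)</title>}i ) {
    print "Page title: $1\n";
}
```

For anything beyond a one-off, a proper HTML parser beats regexes against markup, which is part of why an API or formal web service is always the better option when one exists.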
“Regular Expression” is a fancy way to say “pattern matcher.” Humans can match patterns with relative ease. A machine has a bit more difficulty deciphering patterns, especially in text. As computing became more powerful, the methods for matching text grew into more flexible dialects.
Regular expressions can be one of the toughest concepts to grasp and use effectively in any programming language. Perl is no exception, as its regular expression engine is perhaps the most advanced regex engine in existence. Its power and flexibility also serve to confuse and intimidate many newcomers. It is important to understand the regular expression engine, as it’s often the cause of serious bottlenecks in programs of all shapes and sizes.
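As a small taste of what the engine offers (the log line below is made up for the example), named captures and the /x modifier go a long way toward keeping a pattern readable:

```perl
#!/usr/bin/perl
# Named captures (Perl 5.10+) and the /x modifier let you document a
# pattern inline. The log line is made up for the example.
use strict;
use warnings;

my $line = 'ERROR 2012-06-01 14:02:07 disk /dev/ada0 is 93% full';

if (
    $line =~ m{
        ^(?<level>[A-Z]+)          \s+   # severity
        (?<date>\d{4}-\d{2}-\d{2}) \s+   # ISO date
        (?<time>\d{2}:\d{2}:\d{2}) \s+   # time of day
        (?<msg>.+)$                      # everything else
    }x
  )
{
    print "$+{level} at $+{date} $+{time}: $+{msg}\n";
}
```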