Peter Mortensen

A firewall is certainly not needed for smaller setups. If you have one or two servers, software firewalls are maintainable. With that said, we don't run without dedicated firewalls, and there are a few reasons why I maintain this philosophy:

Separation of Roles

Servers are for applications. Firewalls are for packet inspection, filtering, and policies. A web server should worry about serving web pages, and that's it. Putting both roles in one device is like asking your accountant to also be your security guard.

Software is a moving target

The software on the host is always changing. Applications can create their own firewall exceptions. The OS is updated and patched. Servers are a high-traffic "admin" area, and your firewall and security policies are often far more important to security than your application configurations. In a Windows environment, suppose somebody makes a mistake at some Group Policy level, turns Windows Firewall off on the desktop PCs, and doesn't realize that the change is also going to be applied to the servers. You're wide open in a matter of clicks.

Just speaking to updates, firewall firmware updates generally come out once or twice a year, while OS and service updates are a constant stream.

Reusable services/policies/rules, manageability

If I set up a service/policy called "Web Server" once (say TCP 80 and TCP 443), and apply it to the "web servers group" at the firewall level, that is much more efficient (a couple of configuration changes) and exponentially less prone to human error than setting up firewall services on 10 boxes, and opening up 2 ports x 10 times. When that policy needs to change, it's 1 change vs. 10.
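To make the manageability argument concrete, here is a toy Python sketch (not real firewall syntax; the service name, group, and addresses are made up) of why one named policy applied to a group beats repeating the same rule on every box:

```python
# Hypothetical model: a named service applied to a host group at one
# choke point, versus repeating the same rule on every server.
WEB_SERVICE = {"name": "Web Server", "tcp_ports": [80, 443]}  # defined once

# The "web servers group" -- ten boxes behind the firewall (made-up IPs).
web_servers = [f"10.0.0.{n}" for n in range(1, 11)]

def firewall_rules(service, group):
    """One policy object expands into rules for every member of the group."""
    return [(host, port) for host in group for port in service["tcp_ports"]]

rules = firewall_rules(WEB_SERVICE, web_servers)
print(len(rules))  # 10 hosts x 2 ports = 20 rules from a single definition
```

When the policy changes (say, adding TCP 8443), it's one edit to `WEB_SERVICE`; the per-host alternative is the same edit repeated ten times, with ten chances to fat-finger it.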

I can still manage a firewall during an attack, or after a compromise

Say my combined host-based firewall and application server is under attack and CPU usage is off the charts. Just to start figuring out what's happening, I'm at the mercy of my load staying below the attacker's long enough to even get in and look at it.

An actual experience: I once messed up a firewall rule (left the ports set to ANY instead of a specific one, and the server had a vulnerable service), and the attacker actually had a live Remote Desktop session to the box. Every time I started to get a session, the attacker would kill or disconnect it. If I hadn't been able to shut down that attack from an independent firewall device, it could have been a lot worse.

Independent Monitoring

The logging in dedicated firewall units is usually far superior to that of host-based software firewalls. Some are good enough that you don't even need external SNMP/NetFlow monitoring software to get an accurate picture.

IPv4 Conservation

There is no reason to have two IP addresses if one is for web and one is for mail. Keep the services on separate boxes, and route the ports appropriately via the device designed to do that.
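As a concrete sketch of "route the ports appropriately," here is a toy Python model of the firewall's port-forwarding (DNAT-style) table, where one public address steers different destination ports to different internal hosts. All addresses below are made-up illustration values, not real configuration:

```python
# Hypothetical forwarding table: one public IP, ports steered to
# separate internal boxes. Addresses are documentation/private ranges.
PUBLIC_IP = "203.0.113.10"

FORWARDS = {
    ("tcp", 80):  ("192.168.1.10", 80),   # web box
    ("tcp", 443): ("192.168.1.10", 443),  # web box
    ("tcp", 25):  ("192.168.1.20", 25),   # mail box
}

def route(proto, port):
    """Return the internal (host, port) a packet to PUBLIC_IP is forwarded to."""
    return FORWARDS.get((proto, port))  # None = dropped by the default policy

print(route("tcp", 443))  # ('192.168.1.10', 443)
print(route("tcp", 25))   # ('192.168.1.20', 25)
```

A real firewall expresses the same idea as NAT/port-forward rules; the point is that the mapping lives in one device, so web and mail can share a single public address while staying on separate boxes.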

Brandon