1

I currently have a medium-sized website that probably has a few security flaws. That's probably normal, I think; you can't catch everything. The problem is, I also have a couple of script kiddies who think it's fun to try day and night to crack my website, to do who knows what. Probably something like deleting the DB. I have backups, but it's still a toll on RAM and CPU, and I'd prefer to stop it. Is there any way I can analyze the server logs to easily find out which entries are caused by the script kiddies? They'd probably be identifiable by multiple hits per minute, but it's a pain to go through and pick out those entries by hand when I could be doing something worthwhile.

2
  • If you are just going to accept security flaws and do nothing about them, you are setting yourself up as a target, so what did you expect would happen? Commented Aug 24, 2009 at 6:01
  • There's a difference between not finding every flaw and not doing anything about exploited flaws. I doubt anyone could examine a reasonably sized website and find every security flaw in it. Commented Aug 24, 2009 at 9:51

6 Answers

1

awk '{print $1}' access_log | sort | uniq -c | sort -g

This should produce an ordered list of the IP addresses hitting your site; the first column is the number of hits, the second the IP.

You might have to change the $1: it is the position of the IP address field on the logfile line. On my webserver the IP comes first, hence $1. Fields are separated by whitespace, so the next one is $2, and so on.
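Since the question specifically mentions "multiple hits per minute", a variation on the same idea counts hits per IP per minute rather than overall. This is a sketch that assumes the common combined/common log format, with the client IP in field 1 and the timestamp in field 4 as "[dd/Mon/yyyy:hh:mm:ss"; adjust the field numbers if your format differs:

```shell
# Count requests per IP per minute, highest counts first.
# substr($4, 2, 17) strips the leading "[" and the seconds/timezone,
# leaving "dd/Mon/yyyy:hh:mm" as the minute bucket.
awk '{ minute = substr($4, 2, 17); hits[$1 " " minute]++ }
     END { for (k in hits) print hits[k], k }' access_log | sort -rn | head
```

Any IP that shows up with dozens of hits in the same minute is a candidate for closer inspection.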

1

IMHO you should rather spend your time fixing your website; then you won't have to scan your logfiles all the time.

AWStats and Webalizer are well-known webserver-log-to-statistics tools; maybe you could get some use out of those.

1
  • Even the best-made websites need logging, and the logs need to be analysed. If you don't know how to 'read' your logs, that defeats the purpose. Commented Aug 24, 2009 at 8:02
1

I don't know what a "medium-sized" website is for you. But if it is large enough to have the DB and the webserver on two different machines, then you could use a database firewall such as GreenSQL. That will give you some more information about how they are trying to do it. But you will still need an HTTP log analyzer to find out where they are attacking (which form they are trying to misuse).

1

If you are using Apache, you may want to look into implementing mod_security (http://modsecurity.org/); it can provide a level of protection against some kinds of attacks even if the underlying application is vulnerable. It doesn't catch everything, but it can help when you don't necessarily control the code you're running.
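To give a feel for it, here is a minimal ModSecurity 2.x configuration sketch. The directives are from the 2.x rule language, but the pattern is illustrative only, and newer 2.x releases also require an id action on each rule; consult the reference manual before using anything like this:

```apache
# Illustrative only: enable the rule engine and deny two common probes.
SecRuleEngine On
# Block requests whose parameters look like SQL injection or script tags.
SecRule ARGS "(?i:union.+select|<script)" "deny,status:403,log"
```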

0

Here is a somewhat similar question I asked recently that may have a solution for you:

New to Ubuntu Server, which logs to monitor and what to do

There are a lot of free log analyzers that will all tell you the same info. Do a Google search, test-drive a few, and see what you like.

I don't think I would stop backing up the site. You should always have backups around; too many things can happen that could require them.

So: check the logs, make backups, keep your system patched, use strong passwords, use a firewall (someone recently recommended one), etc.

I am sure you will get some good advice here.

1
  • I think the questioner is looking for a more specific recommendation than your suggestion of a Google search. Are there any log analyzers that you have some experience with, and which you would recommend? Commented Aug 23, 2009 at 18:44
0

Even if you do find the entries belonging to the script kiddies, I doubt it would help you do anything about them. You can't just lock them out by blocking those IP addresses; most people have dynamic IP addresses assigned by their ISPs.

Besides, log files cannot show you all attacks. For example, what about brute-force SSH login attempts?

IMO the only sane approach to your problem is to fix as many security flaws as you can with reasonable effort and to keep backups in case of emergency.

3
  • In most cases there won't be humans directly at the other end of these attacks (and if there are, and they're smart, you'll always lose in the end). They'll be automated, directed at targets chosen by a scan for the software they attack, or sometimes even at random. If they're causing a performance problem, you could look at blocking on the fly, or attempt to implement some form of rate limiting that wouldn't impact real users. Commented Aug 24, 2009 at 10:44
  • I know about these kinds of attacks; I have in fact installed something like this for our LAMP server. Still, it's only a very limited countermeasure. The effectiveness depends very much on how many connections per user you allow, and you have to set the threshold relatively high so that you don't punish good clients who are just using lots of threads to request your web page. Anyway, he was asking for means to protect his website from attacks such as deleting his database; you can't prevent those by analyzing your server logs. Commented Aug 25, 2009 at 8:31
  • I understand what you're talking about. I don't actually intend to block IP addresses, but I thought it was a good idea to identify who the problem is before deciding what action to take. However, the reason I asked is that I know the main problem is attempts to exploit forms and such. Commented Aug 25, 2009 at 11:15
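As one concrete example of the rate limiting mentioned in the comments above, iptables' "recent" match can drop sources that open too many new connections in a short window. This is a hedged sketch, not a recipe: the numbers are illustrative, the recent module caps --hitcount at 20 by default (its ip_pkt_list_tot module parameter must be raised for more), and a threshold set too low will punish normal browsers, which open several connections each:

```shell
# Track every new connection to port 80 under the name "HTTP", then drop
# sources with more than 20 new connections in the last 60 seconds.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
         -m recent --set --name HTTP
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP
```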
