I usually sort by memory ("M" in top) to troubleshoot these kinds of things--that shows you the amount of real (resident) memory each process is using and is touching frequently enough to keep off the least-recently-used list that pages get swapped out from.
VIRT = RES + SWAP
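If you want that same view in scriptable form, here is a rough sketch (generic /proc reading, nothing specific to your box) that lists the biggest resident-memory consumers by pulling VmRSS out of /proc/<pid>/status, which is essentially what top's RES column reports:

```
#!/usr/bin/env python3
# List the biggest resident-memory (RES / VmRSS) consumers, roughly
# what sorting top by memory shows you interactively.
import glob

procs = []
for path in glob.glob("/proc/[0-9]*/status"):
    try:
        with open(path) as f:
            fields = dict(line.split(":", 1) for line in f if ":" in line)
        rss_kib = int(fields.get("VmRSS", "0 kB").split()[0])  # kernel threads have no VmRSS
        procs.append((rss_kib, fields["Name"].strip()))
    except (OSError, ValueError, KeyError):
        continue  # process exited between the glob and the read

for rss_kib, name in sorted(procs, reverse=True)[:15]:
    print(f"{rss_kib:>10} KiB  {name}")
```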
Another thing to check is whether /tmp is a tmpfs file system and if something is writing a lot of data there.
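A quick way to check (just a sketch; adjust the mount point if your tmpfs lives somewhere else) is to look up /tmp's filesystem type in /proc/mounts and, if it is tmpfs, see how much data it is currently holding in RAM:

```
#!/usr/bin/env python3
# Is /tmp a tmpfs, and if so how much memory is it holding right now?
import os

mounts = {}
with open("/proc/mounts") as f:
    for line in f:
        parts = line.split()
        mounts[parts[1]] = parts[2]   # mount point -> filesystem type

fstype = mounts.get("/tmp", "(not a separate mount)")
print("/tmp filesystem type:", fstype)

if fstype == "tmpfs":
    st = os.statvfs("/tmp")
    used_mib = (st.f_blocks - st.f_bfree) * st.f_frsize / 2**20
    print(f"data currently held in /tmp: {used_mib:.1f} MiB of RAM")
```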
I am actually a little confused by what I'm seeing. Is this sar output over the interval when your outage occurred or just the default output? And the top output is from a totally different time, 14:32?
Also, it's not really using swap at the time you took these stats because it doesn't need to--nearly 3G of your memory is currently being used as disk cache ("kbcached"), and you only have kbmemused - (kbcached + kbbuffers) = 664072 KiB (about 648 MiB) [at 04:40:01] in use by actual processes.
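For reference, that arithmetic is just the following (assuming you read the same three columns out of sar -r; the function here is mine, not anything sar provides):

```
def used_by_processes_kib(kbmemused, kbbuffers, kbcached):
    """Memory actually held by processes: total used minus buffers and
    page cache. All values in KiB, straight from the sar -r columns."""
    return kbmemused - (kbcached + kbbuffers)

# Plug in the values from your own sar interval; with the figures in this
# answer it comes out to 664072 KiB, i.e. roughly 648 MiB:
# print(used_by_processes_kib(kbmemused, kbbuffers, kbcached) / 1024, "MiB")
```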
Since no single process image is using much memory itself and yet the oom-killer kicked in, my guess is that something started performing a lot of file I/O and dirtying pages faster than they could be written to disk. I'm not really sure that should trigger the oom-killer, though.
None of those dirty pages would go to swap, because writing the data back to the file it belongs to is about as cheap as writing it to swap, so the kernel just writes it back to the file instead.
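If you want to test the dirty-page theory next time this happens, a simple sketch is to watch the Dirty and Writeback counters in /proc/meminfo while the heavy I/O is going on--if Dirty keeps climbing while Writeback stays maxed out, the writer is outrunning the disk:

```
#!/usr/bin/env python3
# Watch how much dirty data is waiting to be flushed and how much is
# actively under writeback (both reported by the kernel in kB).
import time

def meminfo_kib(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    return 0

while True:
    print(f"Dirty: {meminfo_kib('Dirty')} KiB   Writeback: {meminfo_kib('Writeback')} KiB")
    time.sleep(5)
```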
The obvious guess is that mysqld was doing this, although I would expect it to open its files with O_DIRECT, which asks the kernel to minimize the I/O's effect on the page cache (on the premise that the DB server does its own caching).
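For what it's worth, here is a sketch of what an O_DIRECT open looks like from a program's point of view (Linux only; the path is just a placeholder, and whether mysqld actually does this depends on its configuration, e.g. InnoDB's innodb_flush_method setting):

```
#!/usr/bin/env python3
# Open and write a file with O_DIRECT, bypassing the page cache.
# O_DIRECT requires block-aligned buffers and sizes, which is why an
# anonymous (page-aligned) mmap is used as the write buffer here.
import mmap
import os

fd = os.open("/var/tmp/odirect_demo", os.O_CREAT | os.O_WRONLY | os.O_DIRECT, 0o600)
try:
    buf = mmap.mmap(-1, 4096)   # page-aligned, 4 KiB
    buf[:] = b"x" * 4096
    os.write(fd, buf)           # this write is not cached in the page cache
finally:
    os.close(fd)
```

On a live system you can check what mysqld actually did by looking at /proc/<mysqld pid>/fdinfo/<fd>, which lists the flags each file descriptor was opened with.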