I found the explanation and examples from this link very useful: What exactly is "iowait"?. BTW, for the sake of completeness: the I/O here refers to disk I/O, but it can also include I/O to a network-mounted disk (such as NFS), as explained in this other post.
I will quote a few important sections here (in case the link goes dead). Some of them repeat what others have already said, but to me at least these were clearer:
To summarize it in one sentence, 'iowait' is the percentage of time the CPU is idle AND there is at least one I/O in progress.
Each CPU can be in one of four states: user, sys, idle, iowait.
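On Linux these states are visible as per-CPU counters in /proc/stat (in clock ticks since boot). As a minimal sketch, assuming the usual column order documented in proc(5) -- note that newer kernels split the four states above into a few more (irq, softirq, steal, ...):

```python
#!/usr/bin/env python3
"""Print the raw per-CPU time counters from /proc/stat (Linux only)."""
FIELDS = ["user", "nice", "system", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guest_nice"]

with open("/proc/stat") as f:
    for line in f:
        if line.startswith("cpu"):            # "cpu" = aggregate, "cpu0", "cpu1", ... = per CPU
            name, *values = line.split()
            print(name, dict(zip(FIELDS, map(int, values))))
```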
I was wondering what happens when the system has other processes ready to run while one process is waiting for I/O. The quote below explains it:
If the CPU is idle, the kernel then determines if there is at least one I/O currently in progress to either a local disk or a remotely mounted disk (NFS) which had been initiated from that CPU. If there is, then the 'iowait' counter is incremented by one. If there is no I/O in progress that was initiated from that CPU, the 'idle' counter is incremented by one.
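The %iowait that tools like top and iostat report is just the change in that counter over a sampling interval, divided by the change in all the counters. A rough sketch of that calculation, again assuming Linux and the /proc/stat layout from proc(5) (a toy version of what the real tools do, not a replacement for them):

```python
#!/usr/bin/env python3
"""Compute %idle and %iowait over a one-second interval from two
snapshots of the aggregate "cpu" line in /proc/stat (Linux only)."""
import time

def cpu_counters():
    with open("/proc/stat") as f:
        # First line is the aggregate "cpu" line: user nice system idle iowait ...
        return list(map(int, f.readline().split()[1:]))

before = cpu_counters()
time.sleep(1)
after = cpu_counters()

deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas) or 1                      # guard against a zero interval
idle, iowait = deltas[3], deltas[4]           # columns 4 and 5 of the "cpu" line
print(f"%idle   = {100 * idle   / total:5.1f}")
print(f"%iowait = {100 * iowait / total:5.1f}")
```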
And here is an example:
Let's say that there are two programs running on a CPU. One is a 'dd' program reading from the disk. The other is a program that does no I/O but is spending 100% of its time doing computational work. Now assume that there is a problem with the I/O subsystem and that physical I/Os are taking over a second to complete. Whenever the 'dd' program is asleep while waiting for its I/Os to complete, the other program is able to run on that CPU. When the clock interrupt occurs, there will always be a program running in either user mode or system mode. Therefore, the %idle and %iowait values will be 0. Even though iowait is 0 now, that does not mean there is NOT an I/O problem because there obviously is one if physical I/Os are taking over a second to complete.
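You can reproduce that masking effect yourself. While some slow I/O is in flight (for example a `dd if=/dev/sdX of=/dev/null bs=1M iflag=direct` against a slow or busy disk -- the device name here is just a placeholder), run something like the sketch below: it starts one CPU-bound worker per CPU and keeps printing %iowait. You should see the reported iowait drop toward zero even though the I/O is exactly as slow as before, because the clock tick now always finds a runnable process in user mode. This is only an illustrative experiment, not a diagnostic tool:

```python
#!/usr/bin/env python3
"""Show how CPU-bound work hides iowait: print %iowait before and
after starting one busy-loop worker per CPU (Linux only)."""
import multiprocessing, time

def burn():
    while True:                               # pure computation, no I/O
        pass

def iowait_percent(interval=1.0):
    def snap():
        with open("/proc/stat") as f:
            return list(map(int, f.readline().split()[1:]))
    a = snap(); time.sleep(interval); b = snap()
    d = [y - x for x, y in zip(a, b)]
    return 100 * d[4] / (sum(d) or 1)         # column 5 of the "cpu" line = iowait

if __name__ == "__main__":
    print("iowait with CPUs otherwise idle:", round(iowait_percent(), 1))
    hogs = [multiprocessing.Process(target=burn, daemon=True)
            for _ in range(multiprocessing.cpu_count())]
    for h in hogs:
        h.start()
    for _ in range(5):
        print("iowait with CPU hogs running: ", round(iowait_percent(), 1))
```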
The full text is worth reading. Here is a mirror of this page, in case it goes down.