If what you actually wanted was a stream of zeros, or to benchmark memory throughput, read from /dev/zero. Its performance does not compare directly to that of a file system on permanent storage.
ZFS is designed for integrity, large volumes, and copy-on-write behavior. /dev/zero is implemented by copying a constant straight into userspace memory; because it is that simple and never touches block devices, it is fast.
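For example, a quick way to see the ceiling /dev/zero gives you (block size and count here are arbitrary, illustrative values):

```
# Stream 10 GiB of zeros from /dev/zero straight to /dev/null,
# never touching a block device; dd reports the throughput at the end.
dd if=/dev/zero of=/dev/null bs=1M count=10240
```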
For taking a copy of an entire dataset, ZFS has snapshots. These are documented in many tutorials, so there is no need to go into detail here.
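As a minimal sketch (the pool and dataset names are placeholders):

```
# Create a point-in-time copy of the dataset
zfs snapshot tank/data@before-test

# Optionally replicate it elsewhere as a full copy
zfs send tank/data@before-test | zfs receive backup/data
```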
Spindles or SSDs are in general hundreds to thousands of times slower than DRAM, but that is not the only reason your nearly empty file system is slower. ZFS is simply doing more work.
Whether you read from a snapshot or not, the reads still pass through several ZFS subsystems, eventually hit the block devices as physical I/O, and get checksummed. There are several performance optimizations along the way, including read-ahead and caching. All of this takes significant work to reach gigabyte-per-second speeds, even when the physical reads are mostly metadata for unallocated blocks.
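If you want to see how much of the benchmark is being served from cache rather than disk, the ARC counters are exposed on OpenZFS for Linux (paths and tooling vary by platform, so treat this as a sketch):

```
# ARC hit/miss/size counters; the path may differ on other platforms
grep -E '^(hits|misses|size) ' /proc/spl/kstat/zfs/arcstats

# If the arcstat utility is installed, watch hit ratios live during the run
arcstat 1
```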
Your microbenchmark can also vary a lot depending on whether the ZFS dataset is compressed, given that you wrote zeros to it. Obviously, 100% zeros compresses perfectly, even with just the zero-length encoding (ZLE) algorithm.
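You can check this directly (dataset name is a placeholder):

```
# Shows whether compression is enabled and how well the written data compressed
zfs get compression,compressratio tank/data
```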
Many file systems, including ext4, lack data checksums. A fairer comparison would checksum the data as it is read, for example by piping it through sha256sum (or whichever algorithm the dataset uses), although that still isn't quite the same implementation.
/dev/zero only has to copy zeros into the output buffer: no physical disks, no metadata, no checksums, just zeros written to memory.
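A rough sketch of such a checksummed comparison (file paths are placeholders, and sha256sum is heavier than ZFS's default fletcher4, so treat the numbers as an illustration rather than an apples-to-apples result):

```
# Read the test file from ZFS, checksumming it on the way
dd if=/tank/data/testfile of=/dev/null bs=1M  # plain read
dd if=/tank/data/testfile bs=1M | sha256sum   # read plus a userspace checksum

# The same read from ext4 now also pays a checksum cost
dd if=/mnt/ext4/testfile bs=1M | sha256sum

# /dev/zero with a checksum, for contrast
dd if=/dev/zero bs=1M count=10240 | sha256sum
```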
As always, sampling what exactly is on CPU is relatively simple. On Linux there is perf record, and you can turn its output into nice flame graph visualizations.
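Something along these lines, assuming the stackcollapse-perf.pl and flamegraph.pl scripts from Brendan Gregg's FlameGraph repository and a placeholder dd workload:

```
# Sample on-CPU stacks at 99 Hz while the benchmark runs
perf record -F 99 -g -- dd if=/tank/data/testfile of=/dev/null bs=1M

# Fold the samples and render an interactive SVG flame graph
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > zfs-read.svg
```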
Use `dd` to set the block size to whatever you defined in the `zfs` command and test again, and run `vmstat 1 10` during the `dd` run.
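Concretely, something like this (the dataset name and path are placeholders; 128K is the ZFS default recordsize):

```
# See what recordsize the dataset uses
zfs get recordsize tank/data

# Re-run the read with a matching block size, in the background
dd if=/tank/data/testfile of=/dev/null bs=128k &

# Watch memory, swap, I/O and CPU once per second for 10 seconds while dd runs
vmstat 1 10
```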