I would presume that, as you have heartbeat controlling failover, you are running an active/passive DRBD cluster. At failover time, heartbeat on the passive node detects that it must promote itself to active. In this process it (usually) broadcasts the fact that it is taking over the primary's VIP, then promotes and mounts the DRBD disk, making the filesystem accessible, and finally brings up the necessary software (MySQL, Apache etc.) as per haresources.
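For illustration, the takeover boils down to roughly these manual steps (a sketch only; the resource name r0 is an assumption, chosen to match the /dev/drbd0 device in the example below):

    # roughly what heartbeat does when it takes over, by hand:
    drbdadm primary r0       # promote the local DRBD resource to primary
    mount /dev/drbd0 /drbd   # mount the now-accessible replicated filesystem
    /etc/init.d/mysql start  # start the services listed in haresources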
You should add any extra services you require to start after failover to your /etc/ha.d/haresources file in the format:
    #node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
    db1 192.168.100.200/24/eth0 drbddisk::mysql Filesystem::/dev/drbd0::/drbd::ext3::defaults mysql
(the first, commented line is the stock sample shipped in haresources; the second reads: preferred node, VIP, DRBD resource to promote, filesystem to mount, then the service to start - resources are acquired left to right on takeover and released in reverse) with the appropriate startup script in /etc/ha.d/resource.d/mysql (or named according to the script's function!) - further details in Configuring haresources, the DRBD manual and the OpenVZ wiki.
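A resource script only has to honour start and stop - heartbeat calls it with "start" on takeover and "stop" on release. A minimal sketch, assuming the standard MySQL init script path:

    #!/bin/sh
    # /etc/ha.d/resource.d/mysql - minimal heartbeat resource wrapper (sketch)
    case "$1" in
        start) exec /etc/init.d/mysql start ;;
        stop)  exec /etc/init.d/mysql stop ;;
        *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
    esac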
The crux of the matter is that there is effectively no disk for svnserve to read your repositories from until the node has taken over as active: DRBD refuses all access to the device while it is in the secondary (passive) role. It is possible to run DRBD active/active (dual-primary, which also needs a cluster-aware filesystem such as OCFS2 or GFS), but it's a relatively new feature and not something I've tried!
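You can see this for yourself on the passive node (again assuming resource r0 on /dev/drbd0):

    cat /proc/drbd           # shows the local role as Secondary
    mount /dev/drbd0 /mnt    # refused - a Secondary device cannot be opened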
One gotcha that's not well documented: instead of using the hb_takeover scripts to test failover, simply stop the heartbeat service on the primary and wait for the secondary to take over, watching both servers with tail -f /var/log/ha-log. This has the added bonus of exercising the deadtime, warntime and initdead parameters in ha.cf, all of which matter in a real-world failover.
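For reference, those directives live in /etc/ha.d/ha.cf and look something like this (the values here are illustrative, not recommendations):

    keepalive 2     # seconds between heartbeat packets
    warntime 10     # log a "late heartbeat" warning after this long
    deadtime 30     # declare the peer dead and begin takeover after this long
    initdead 120    # deadtime used at boot, while the peer may still be starting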