
My problem was that the HDDs of my ZFS RAID were partly degraded and partly destroyed after a lightning strike.

I was able to detect the problem with zpool status:

zpool status myzfs

  pool: myzfs
 state: DEGRADED (DESTROYED)
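For a quick health overview there is also zpool status -x, which only reports pools that have problems, and -v for per-device detail. A minimal sketch, using the pool name from above:

zpool status -x        # prints only pools with errors; healthy pools are summarised as "all pools are healthy"
zpool status -v myzfs  # detailed status of the affected pool, including per-device error counters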

The good news: ZFS seems to be really reliable, and in my case I was able to recover the RAID fully. See my answer below for how I recovered it.

Recovering a ZFS RAID, I learned a few things:

  1. A single failed drive can bring a zpool down. A RAID based on striped mirrors, however, stays available. Details are explained in ZFS: You should use mirror vdevs, not RAIDZ
  2. Recovering and resilvering a zpool based on raidz2-0 takes a really long time. You may be better off with a striped mirror. This has pros and cons that are widely discussed on the internet.
  3. A RAID is NOT a backup! Offsite backups to the cloud or a second location are a big advantage and are possible today without much effort. Most RAID/NAS systems allow backups to the cloud or ZFS replication to another NAS (a sketch follows below this list).
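As a rough illustration of ZFS replication to a second NAS (item 3 above): the dataset, snapshot and host names below are made up, so treat this as a sketch only, not a tested backup procedure.

zfs snapshot myzfs/shares@backup-2015-01-21
zfs send myzfs/shares@backup-2015-01-21 | ssh backup-nas zfs receive backuppool/shares
    # full initial copy of the dataset to a second machine running ZFS

zfs snapshot myzfs/shares@backup-2015-01-22
zfs send -i myzfs/shares@backup-2015-01-21 myzfs/shares@backup-2015-01-22 | ssh backup-nas zfs receive backuppool/shares
    # later runs send only the changes between snapshots; the target dataset
    # must be unmodified since the last receive (or use zfs receive -F)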

Original debug information

The following is, however, not necessarily needed to detect and solve the problem.

I am having trouble with my FreeNAS 9.2.1 box; it crashed today. It runs a file server on a ZFS RAID-Z2 pool. I am not sure what exactly is causing the problems. The system boots, but reacts very slowly. From the logs I could not identify anything clearly wrong, so I am not sure where to start with the error analysis and how to solve it.

The problem is that the system crashes and responds very slowly. The FreeNAS web interface crashes as well, since Python dies.

FreeNAS is installed on a USB stick; an additional drive (2 TB) is attached for backups. The other four drives run as the ZFS RAID.

The hard drives show SMART errors. How can I fix them? Could they be the reason for the problems?
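For reference, these are the kinds of smartctl checks used below. They only confirm what SMART reports rather than fix anything; reallocated and pending sectors usually mean the drive should be replaced. The device name /dev/ada3 is the failing disk from the logs further down.

smartctl -H /dev/ada3           # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/ada3           # vendor attributes: Reallocated_Sector_Ct, Current_Pending_Sector, ...
smartctl -t short /dev/ada3     # start a short self-test (use -t long for a full surface test)
smartctl -l selftest /dev/ada3  # read the self-test log once the test has finished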

TOP

CPU:  0.1% user,  0.0% nice,  2.5% system,  0.1% interrupt, 97.3% idle
Mem: 131M Active, 11G Inact, 3689M Wired, 494M Cache, 3232M Buf, 16M Free
ARC: 3028K Total, 347K MFU, 1858K MRU, 16K Anon, 330K Header, 477K Other
Swap: 10G Total, 636K Used, 10G Free

DF

Filesystem                      Size    Used   Avail  Capacity  Mounted on
/dev/ufs/FreeNASs2a             971M    866M     27M     97%    /
devfs                           1.0k    1.0k      0B    100%    /dev
/dev/md0                        4.8M    3.5M    918k     79%    /etc
/dev/md1                        843k    2.6k    773k      0%    /mnt
/dev/md2                        156M     40M    103M     28%    /var
/dev/ufs/FreeNASs4               20M    3.4M     15M     18%    /data
fink-zfs01                      6.0T    249k    6.0T      0%    /mnt/fink-zfs01
fink-zfs01/.system              6.0T    249k    6.0T      0%    /mnt/fink-zfs01/.system
fink-zfs01/.system/cores        6.0T     14M    6.0T      0%    /mnt/fink-zfs01/.system/cores
fink-zfs01/.system/samba4       6.0T    862k    6.0T      0%    /mnt/fink-zfs01/.system/samba4
fink-zfs01/.system/syslog       6.0T    2.7M    6.0T      0%    /mnt/fink-zfs01/.system/syslog
fink-zfs01/shares               6.0T    261k    6.0T      0%    /mnt/fink-zfs01/shares
fink-zfs01/shares/fink-privat   6.4T    344G    6.0T      5%    /mnt/fink-zfs01/shares/fink-privat
fink-zfs01/shares/gf            6.0T    214k    6.0T      0%    /mnt/fink-zfs01/shares/gf
fink-zfs01/shares/kundendaten   6.6T    563G    6.0T      9%    /mnt/fink-zfs01/shares/kundendaten
fink-zfs01/shares/zubehoer      6.6T    539G    6.0T      8%    /mnt/fink-zfs01/shares/zubehoer
fink-zfs01/temp                 6.2T    106G    6.0T      2%    /mnt/fink-zfs01/temp
/dev/ufs/Backup                 1.9T    114G    1.7T      6%    /mnt/Backup

/var/log/messages

Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed to start syslogd
Jan 21 21:48:32 s-FreeNAS kernel: .
Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed to start watchdogd
Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed precmd routine for vmware_guestd
Jan 21 21:48:34 s-FreeNAS ntpd[2589]: ntpd 4.2.4p5-a (1)
Jan 21 21:48:34 s-FreeNAS kernel: .
Jan 21 21:48:36 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: zfs list -H -o mountpoint,name
Jan 21 21:48:36 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: zfs list -H -o mountpoint
Jan 21 21:48:38 s-FreeNAS last message repeated 4 times
Jan 21 21:48:38 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: /usr/local/bin/pdbedit -d 0 -i smbpasswd:/tmp/tmpEKKZ2A -e tdbsam:/var/etc/private/passdb.tdb -s /usr/local/etc/smb4.conf
Jan 21 21:48:43 s-FreeNAS ntpd[2590]: time reset -0.194758 s
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, FAILED SMART self-check. BACK UP DATA NOW!
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, 164 Currently unreadable (pending) sectors
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, Failed SMART usage Attribute: 5 Reallocated_Sector_Ct.
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, previous self-test completed with error (unknown test element)
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNSResponder (Engineering Build) (Mar 1 2014 18:12:24) starting
Jan 21 21:48:51 s-FreeNAS mDNSResponder: 8: Listening for incoming Unix Domain Socket client requests
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNS_AddDNSServer: Lock not held! mDNS_busy (0) mDNS_reentrancy (0)
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNS_AddDNSServer: Lock not held! mDNS_busy (0) mDNS_reentrancy (0)
Jan 21 21:48:53 s-FreeNAS netatalk[3142]: Netatalk AFP server starting
Jan 21 21:48:53 s-FreeNAS cnid_metad[3179]: CNID Server listening on localhost:4700
Jan 21 21:48:53 s-FreeNAS kernel: done.
Jan 21 21:48:54 s-FreeNAS mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C2FD60 s-FreeNAS.local. (Addr) that's already in the list
...
Jan 21 21:48:54 s-FreeNAS mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C30180 109.1.1.10.in-addr.arpa. (PTR) that's already in the list
Jan 21 22:04:44 s-FreeNAS kernel: swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1572950, size: 8192
...
Jan 21 22:05:25 s-FreeNAS kernel: GEOM_ELI: g_eli_read_done() failed ada0p1.eli[READ(offset=110592, length=4096)]
Jan 21 22:05:25 s-FreeNAS kernel: swap_pager: I/O error - pagein failed; blkno 1572894,size 4096, error 5
Jan 21 22:05:25 s-FreeNAS kernel: vm_fault: pager read error, pid 3020 (python2.7)
Jan 21 22:05:25 s-FreeNAS kernel: Failed to write core file for process python2.7 (error 14)
...
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 08 70 02 00 40 00 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): CAM status: ATA Status Error
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): RES: 41 40 70 02 00 40 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jan 21 22:19:44 s-FreeNAS kernel: GEOM_ELI: g_eli_read_done() failed ada0p1.eli[READ(offset=253952, length=4096)]
Jan 21 22:19:44 s-FreeNAS kernel: swap_pager: I/O error - pagein failed; blkno 1572929,size 4096, error 5
Jan 21 22:19:44 s-FreeNAS kernel: vm_fault: pager read error, pid 2869 (smartd)
Jan 21 22:19:44 s-FreeNAS kernel: Failed to write core file for process smartd (error 14)
Jan 21 22:19:44 s-FreeNAS kernel: pid 2869 (smartd), uid 0: exited on signal 11

smartctl --scan

/dev/ada0 -d atacam # /dev/ada0, ATA device
/dev/ada1 -d atacam # /dev/ada1, ATA device
/dev/ada2 -d atacam # /dev/ada2, ATA device
/dev/pass3 -d atacam # /dev/pass3, ATA device
/dev/ada3 -d atacam # /dev/ada3, ATA device
/dev/ada4 -d atacam # /dev/ada4, ATA device
/dev/ada5 -d atacam # /dev/ada5, ATA device

smartctl -a /dev/ada3

smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p3 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD4000F9YZ-09N20L0
Serial Number:    WD-WMC1F1211607
LU WWN Device Id: 5 0014ee 0ae5c0b4c
Firmware Version: 01.01A01
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Wed Jan 21 23:07:55 2015 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
See vendor-specific Attribute list for failed Attributes.

General SMART Values:
Offline data collection status:  (0x85) Offline data collection activity was aborted
                                        by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (  73) The previous self-test completed having a test
                                        element that failed and the test element that
                                        failed is not known.
Total time to complete Offline
data collection:                (41640) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 451) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED  RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   187   187   051    Pre-fail  Always       -        553
  3 Spin_Up_Time            0x0027   142   138   021    Pre-fail  Always       -        11900
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -        93
  5 Reallocated_Sector_Ct   0x0033   139   139   140    Pre-fail  Always   FAILING_NOW  1791
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -        0
  9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -        7553
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -        0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -        0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -        93
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -        0
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -        59
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -        35
194 Temperature_Celsius     0x0022   108   098   000    Old_age   Always       -        44
196 Reallocated_Event_Count 0x0032   001   001   000    Old_age   Always       -        353
197 Current_Pending_Sector  0x0032   200   199   000    Old_age   Always       -        162
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -        0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -        0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -        0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                     Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: unknown failure    90%        7553          -
# 2  Short offline       Completed: unknown failure    90%        7552          -
# 3  Short offline       Completed: unknown failure    90%        7551          -
# 4  Short offline       Completed: unknown failure    90%        7550          -
# 5  Short offline       Completed: unknown failure    90%        7549          -
# 6  Short offline       Completed: unknown failure    90%        7548          -
# 7  Short offline       Completed: unknown failure    90%        7547          -
# 8  Short offline       Completed: unknown failure    90%        7546          -
# 9  Short offline       Completed: unknown failure    90%        7545          -
#10  Short offline       Completed: unknown failure    90%        7544          -
#11  Short offline       Completed: unknown failure    90%        7543          -
#12  Short offline       Completed: unknown failure    90%        7542          -
#13  Short offline       Completed without error       00%        7541          -
#14  Short offline       Completed without error       00%        7540          -
#15  Short offline       Completed: read failure       10%        7538          1148054536
#16  Short offline       Completed: read failure       10%        7538          1148054536
#17  Short offline       Completed: read failure       10%        7536          1148057328
#18  Short offline       Completed: read failure       10%        7535          1148057328
#19  Short offline       Completed without error       00%        7530          -
#20  Short offline       Completed without error       00%        7529          -
#21  Short offline       Completed: read failure       10%        7528          1148057328

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

2 Answers


FreeBSD crashes because of this error:

Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 08 70 02 00 40 00 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): CAM status: ATA Status Error
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): RES: 41 40 70 02 00 40 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jan 21 22:19:44 s-FreeNAS kernel: GEOM_ELI: g_eli_read_done() failed ada0p1.eli[READ(offset=253952, length=4096)]
Jan 21 22:19:44 s-FreeNAS kernel: swap_pager: I/O error - pagein failed; blkno 1572929,size 4096, error 5

This means either a bad SATA cable or, given that you also have another dying disk (ada3), possibly a power supply problem. The affected device is a system disk or swap space, and because it is plain UFS with no redundancy, there is no way for the system to cope with this.
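If you need the box to stay up long enough to collect more data, one possible mitigation is to stop using the failing disk for swap, so page-ins no longer hit the bad sectors. This is only a sketch; the device name is taken from the log above and may differ on your system, and swapoff needs enough free RAM to succeed.

swapinfo                  # shows which devices currently back swap (here ada0p1.eli)
swapoff /dev/ada0p1.eli   # stop paging to the failing disk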

As for ZFS, please post the zpool status output.

  • Thanks for your help. I somehow overlooked your answer, but found out myself that the zpool was degraded. Luckily I was able to recover my ZFS zpool; running zpool status and then recovering was the right way. Commented Aug 30, 2016 at 9:34

DISCLAIMER:

Repairing a destroyed ZFS pool and replacing degraded or unavailable devices is dangerous. For me it worked, but read all documentation carefully and read up on typical failure modes first; otherwise you may destroy your RAID completely. You may also want to contact a professional to help you recover your data.

However, if you read carefully, you should be able to recover your ZFS RAID yourself!

Situation

Further research showed that a lightning strike had caused current peaks that destroyed some hard drives and corrupted others. As a result, the ZFS pool was defective.

This can be detected using zpool status:

zpool status myzfs

  pool: myzfs
 state: DEGRADED (DESTROYED)

Recovering destroyed ZFS Storage Pool

This problem could (at least in my case) be solved with:

zpool destroy myzfs
zpool import -Df    # this made the zpool accessible again
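For context, the usual sequence from the Oracle documentation is to first list destroyed pools and then force-import the one you want by name; a short sketch using the pool name from above:

zpool import -D          # list pools that have been destroyed but are still importable
zpool import -Df myzfs   # force-import the destroyed pool by name
zpool status myzfs       # verify the pool state after the import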

The zpool, however, remained degraded because one drive was completely destroyed.

Full documentation on recovering destroyed ZFS pools can be found in the Oracle documentation on Recovering Destroyed ZFS Storage Pools.

Replacing degraded ZFS drive

Recovering the degraded zpool did not fully solve the problem, since the pool still contained degraded/defective drives:

zpool status myzfs

config:

        NAME                  STATE     READ WRITE CKSUM
        myzfs                 DEGRADED     0     0     0
          raidz2-0            DEGRADED     0     0     0
            gptid/uuid1       ONLINE       0     0     0
            gptid/uuid2       ONLINE       0     0     0
            gptid/uuid3       ONLINE       0     0     0
            778923478919345   UNAVAIL      0     0     0  was /dev/ada4
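To work out which physical disk the UNAVAIL entry refers to (so you pull the right one), the following FreeBSD commands can help; the output obviously differs per system, and /dev/ada4 is just the former device name reported above:

glabel status          # maps gptid/... labels to the underlying ada*/da* devices
camcontrol devlist     # lists attached disks per ada device with vendor and model
smartctl -i /dev/ada4  # prints model and serial number to match against the drive's label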

Drive replacement is also documented very well on the internet. However, the exact procedure depends a little on whether you are using a hot spare, on the RAID level, and so on.

Basically, for me it came down to a single command:

zpool replace myzfs 778923478919345 

The replacement is also documented very well by Oracle under Replacing a Device in a ZFS Storage Pool.
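After the replace command, the pool resilvers in the background. A sketch of how to watch progress and tidy up afterwards, using the pool name from above:

zpool status -v myzfs   # shows resilver progress and an estimated time to completion
zpool clear myzfs       # reset the error counters once the resilver has completed
zpool scrub myzfs       # optional: verify all data against checksums afterwards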
