[linux] Faulty disk in a RAID - how to find the cause or the bad sector

Matus Horvath matus at mujmail.cz
Thursday May 7 11:36:49 CEST 2009


patrik at foral.sk wrote:
> Hi,
> I have a problem with a disk in a RAID1 array - it dropped into degraded 
> mode, but I don't know how to tell whether this warrants an RMA of the 
> disk. I found that it should be possible to check it with fsck (e2fsck), 
> but that requires single-user mode. Can it be done while the system is 
> running? I assume I would have to remove the partition from the array, 
> mount it read-only and run the check.
> 
> I do have a backup and a spare disk, of course, but I cannot judge 
> whether the disk is really faulty and thus ready for an RMA 
> (replacement). After re-adding the disk to the array it resynchronized 
> and seems to be running fine - see the output below. Thanks for any info.
> 
> ---------------------8<------------------
>  > mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.03
>   Creation Time : Tue Dec  2 11:20:35 2008
>      Raid Level : raid1
>      Array Size : 4192832 (4.00 GiB 4.29 GB)
>     Device Size : 4192832 (4.00 GiB 4.29 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu May  7 10:15:39 2009
>           State : active, degraded
>  Active Devices : 1
> Working Devices : 1
>  Failed Devices : 1
>   Spare Devices : 0
> 
>            UUID : 7e264bc6:cbd39c1f:c230666b:5103eba0
>          Events : 0.521971
> 
>     Number   Major   Minor   RaidDevice State
>        0       0        0        0      removed
>        1       8        1        1      active sync   /dev/sda1
> 
>        2       8       17        -      faulty spare   /dev/sdb1
>  > mdadm /dev/md0 -f /dev/sdb1
> mdadm: set /dev/sdb1 faulty in /dev/md0
>  > mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.03
>   Creation Time : Tue Dec  2 11:20:35 2008
>      Raid Level : raid1
>      Array Size : 4192832 (4.00 GiB 4.29 GB)
>     Device Size : 4192832 (4.00 GiB 4.29 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu May  7 10:31:32 2009
>           State : clean, degraded
>  Active Devices : 1
> Working Devices : 1
>  Failed Devices : 1
>   Spare Devices : 0
> 
>            UUID : 7e264bc6:cbd39c1f:c230666b:5103eba0
>          Events : 0.522314
> 
>     Number   Major   Minor   RaidDevice State
>        0       0        0        0      removed
>        1       8        1        1      active sync   /dev/sda1
> 
>        2       8       17        -      faulty spare   /dev/sdb1
>  > mdadm /dev/md0 -r /dev/sdb1
> mdadm: hot removed /dev/sdb1
>  > mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.03
>   Creation Time : Tue Dec  2 11:20:35 2008
>      Raid Level : raid1
>      Array Size : 4192832 (4.00 GiB 4.29 GB)
>     Device Size : 4192832 (4.00 GiB 4.29 GB)
>    Raid Devices : 2
>   Total Devices : 1
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu May  7 10:32:23 2009
>           State : active, degraded
>  Active Devices : 1
> Working Devices : 1
>  Failed Devices : 0
>   Spare Devices : 0
> 
>            UUID : 7e264bc6:cbd39c1f:c230666b:5103eba0
>          Events : 0.522341
> 
>     Number   Major   Minor   RaidDevice State
>        0       0        0        0      removed
>        1       8        1        1      active sync   /dev/sda1
>  > mdadm /dev/md0 -a /dev/sdb1
> mdadm: re-added /dev/sdb1
>  > mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.03
>   Creation Time : Tue Dec  2 11:20:35 2008
>      Raid Level : raid1
>      Array Size : 4192832 (4.00 GiB 4.29 GB)
>     Device Size : 4192832 (4.00 GiB 4.29 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu May  7 10:34:23 2009
>           State : active
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
> 
>            UUID : 7e264bc6:cbd39c1f:c230666b:5103eba0
>          Events : 0.522393
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       17        0      active sync   /dev/sdb1
>        1       8        1        1      active sync   /dev/sda1
> 
> 

Hi,

try running smartctl on it (smartctl -a /dev/sdb); it will tell you what 
the disk itself thinks about its own condition. Also, if you still have 
the logs from the time it started misbehaving, look for a kernel message 
that would explain why the RAID got degraded.

regards,

Matus
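A minimal sketch of those two checks, assuming the suspect disk is /dev/sdb as in the thread and that the kernel log lands in /var/log/messages (both paths may differ on your system):

```shell
# Suspect disk from the thread; adjust to your setup.
DISK=/dev/sdb

# Overall health verdict plus the full SMART attribute table.
smartctl -a "$DISK"

# The attributes that most often justify an RMA: a nonzero raw value
# in any of these means the drive is remapping sectors or failing to
# read them.
smartctl -A "$DISK" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

# Long surface self-test; it runs inside the drive itself, so the
# system can keep using the disk meanwhile.  Read the verdict later:
smartctl -t long "$DISK"
smartctl -l selftest "$DISK"

# Kernel messages from around the failure (log file name varies by
# distro) -- look for ata/sd error lines naming the same device.
grep -iE 'ata[0-9]|sdb|md0' /var/log/messages
```

If the self-test log or any of the three attributes above shows errors, the disk is a clear candidate for replacement even though the resync succeeded.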

