1. Problem Description
Symptom: after a hardware power loss, the Oracle cluster fails to start.
[root@rac2 ~]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[root@rac2 ~]# crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
2. Troubleshooting
Checking the cluster components shows that ora.asm is OFFLINE:
[root@rac2 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE Instance Shutdown
ora.cluster_interconnect.haip
1 ONLINE ONLINE rac2
ora.crf
1 ONLINE ONLINE rac2
ora.crsd
1 ONLINE OFFLINE
ora.cssd
1 ONLINE ONLINE rac2
ora.cssdmonitor
1 ONLINE ONLINE rac2
ora.ctssd
1 ONLINE ONLINE rac2 OBSERVER
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE ONLINE rac2
ora.evmd
1 ONLINE INTERMEDIATE rac2
ora.gipcd
1 ONLINE ONLINE rac2
ora.gpnpd
1 ONLINE ONLINE rac2
ora.mdnsd
1 ONLINE ONLINE rac2
The Grid Infrastructure alert log shows that the disk group was never mounted:
[ohasd(4329)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2018-05-08 04:12:24.940:
[cssd(4576)]CRS-1707:Lease acquisition for node rac2 number 2 completed
2018-05-08 04:12:26.188:
[cssd(4576)]CRS-1605:CSSD voting file is online: /dev/asmdisk/oraasm-OCR_0000; details in /u01/app/11.2.0/grid/log/rac2/cssd/ocssd.log.
2018-05-08 04:12:28.723:
[cssd(4576)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
2018-05-08 04:12:30.617:
[ctssd(4660)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
2018-05-08 04:12:30.617:
[ctssd(4660)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2018-05-08 04:12:32.348:
[ohasd(4329)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2018-05-08 04:12:32.348:
[ohasd(4329)]CRS-2769:Unable to failover resource 'ora.diskmon'.
The ASM alert log reports ORA-00600 [kfrValAcd30]:
NOTE: GMON heartbeating for grp 2
GMON querying group 2 at 6 for pid 23, osid 5727
NOTE: cache opening disk 0 of grp 2: DATA_0000 path:/dev/asmdisk/oraasm-ASM_0000
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 2: DATA_0001 path:/dev/asmdisk/oraasm-ASM_0001
NOTE: F1X0 found on disk 1 au 2 fcn 0.0
NOTE: cache opening disk 2 of grp 2: DATA_0002 path:/dev/asmdisk/oraasm-ASM_0002
NOTE: F1X0 found on disk 2 au 2 fcn 0.0
NOTE: cache opening disk 3 of grp 2: DATA_0003 path:/dev/asmdisk/oraasm-ASM_0003
NOTE: cache mounting (first) normal redundancy group 2/0x877A96CD (DATA)
* allocate domain 2, invalid = TRUE
NOTE: attached to recovery domain 2
NOTE: starting recovery of thread=1 ckpt=8.390 group=2 (DATA)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_ora_5727.trc (incident=50111):
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [8], [390], [9], [390], [], [], [], [], []
ORA-15017: diskgroup "ASM" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "ASM"
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM2/incident/incdir_50111/+ASM2_ora_5727_i50111.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_ora_5727.trc:
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [8], [390], [9], [390], [], [], [], [], []
ORA-15017: diskgroup "ASM" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "ASM"
Abort recovery for domain 2
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DATA
NOTE: cache dismounting (clean) group 2/0x877A96CD (DATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 5727, image: oracle@rac2 (TNS V1-V3)
Examining the incident trace file +ASM2_ora_5727_i50111.trc:
kfdp_query(DATA): 6
----- Abridged Call Stack Trace -----
ksedsts()+465<-kfdp_query()+530<-kfdPstSyncPriv()+585<-kfgFinalizeMount()+1630<-kfgscFinalize()+1433<-kfgForEachKfgsc()+285<-kfgsoFinalize()+135<-kfgFinalize()+398<-kfxdrvMount()+5558<-kfxdrvEntry()+2207<-opiexe()+20624<-opiosq0()+3932<-kpooprx()+274<-kpoal8()+842
<-opiodr()+917<-ttcpip()+2183<-opitsk()+1710<-opiino()+969<-opiodr()+917<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+265<-main()+201<-__libc_start_main()+253
----- End of Abridged Call Stack Trace -----
2018-05-17 21:54:44.321623 : Start recovery for domain=2, valid=0, flags=0x4
NOTE: starting recovery of thread=1 ckpt=8.390 group=2 (DATA)
Incident 50111 created, dump file: /u01/app/grid/diag/asm/+asm/+ASM2/incident/incdir_50111/+ASM2_ora_5727_i50111.trc
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [8], [390], [9], [390], [], [], [], [], []
ORA-15017: diskgroup "ASM" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "ASM"
recovery of group DATA failed due to the following error(s):
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [8], [390], [9], [390], [], [], [], [], []
ORA-15017: diskgroup "ASM" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "ASM"
2018-05-17 21:54:45.016907 : Abort recovery for domain 2, flags = 0x4
2018-05-17 21:54:45.017799 : kjb_abort_recovery: domain flags=0x0, valid=0
NOTE: messaging CKPT to quiesce pins Unix process pid: 5727, image: oracle@rac2 (TNS V1-V3)
kfdp_dismount(): 7
----- Abridged Call Stack Trace -----
ksedsts()+465<-kfdp_dismountInt()+389<-kfdp_dismount()+11<-kfgTermCache()+347<-kfgRecoverDismount()+373<-kfgRecoverMount()+277<-kfgscDelete()+2742<-kssdel()+155<-kfgscFinalize()+1211<-kfgForEachKfgsc()+285<-kfgsoFinalize()+135<-kfgFinalize()+398<-kfxdrvMount()+5558
<-kfxdrvEntry()+2207<-opiexe()+20624<-opiosq0()+3932<-kpooprx()+274<-kpoal8()+842<-opiodr()+917<-ttcpip()+2183<-opitsk()+1710<-opiino()+969<-opiodr()+917<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+265<-main()+201<-__libc_start_main()+253
----- End of Abridged Call Stack Trace -----
ASM name of disk 0x9f1d9488 (2:0:DATA_0000:/dev/asmdisk/oraasm-ASM_0000) is being cleared
ASM name of disk 0x9f1d9808 (2:1:DATA_0001:/dev/asmdisk/oraasm-ASM_0001) is being cleared
ASM name of disk 0x9f1d9108 (2:2:DATA_0002:/dev/asmdisk/oraasm-ASM_0002) is being cleared
ASM name of disk 0x9f1d8d88 (2:3:DATA_0003:/dev/asmdisk/oraasm-ASM_0003) is being cleared
NOTE: messaging CKPT to quiesce pins Unix process pid: 5727, image: oracle@rac2 (TNS V1-V3)
kfdp_dismount(): 8
----- Abridged Call Stack Trace -----
ksedsts()+465<-kfdp_dismountInt()+389<-kfdp_dismount()+11<-kfgTermCache()+347<-kfgRecoverDismount()+373<-kfgRecoverMount()+277<-kfgscDelete()+2742<-kssdct()+258<-kfgsoDelete()+721<-kssdel()+155<-kssdch()+6849<-ksures()+52<-opiosq0()+4721<-kpooprx()+274<-kpoal8()+842
<-opiodr()+917<-ttcpip()+2183<-opitsk()+1710<-opiino()+969<-opiodr()+917<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+265<-main()+201<-__libc_start_main()+253
----- End of Abridged Call Stack Trace -----
A search on MOS (My Oracle Support) turns up several matching bugs:
Bug 27288230 - ORA-600: [kfrvalacd30] in ASM SMON (Doc ID 27288230.8)
Bug 19064132 : ORA-600 [KFRVALACD30] MOUNTING DISKGROUP AFTER RESTARTING SERVERS
ORA-600 [KFRVALACD30] in ASM (Doc ID 2123013.1)
The bug exists in 11.2.0.1, 11.2.0.3, 11.2.0.4, 12.1, and 18.1.
Oracle attributes this to a storage or OS problem that leaves the metadata in the ASM ACD block inconsistent.
Both the primary and the mirror extents of the ASM metadata can end up corrupted. The corruption can cause a rebalance to hang or fail repeatedly, or prevent the disk group from being mounted.
Running strace shows the ORA-600 [kfrValAcd30] error text being written out, but turns up no other useful information:
write(25, "string_value", 12) = 12
write(25, ">", 1) = 1
write(25, "ORA 600 [kfrValAcd30]", 21) = 21
write(25, "</", 2) = 2
write(25, "string_value", 12) = 12
write(25, ">", 1) = 1
write(25, "</", 2) = 2
write(25, "pkey_dbgristih", 14) = 14
write(25, ">", 1) = 1
write(25, "\n", 1) = 1
write(25, "<", 1) = 1
write(25, "creatm_dbgristih", 16) = 16
write(25, ">", 1) = 1
write(25, "<", 1) = 1
write(25, "time", 4) = 4
write(25, ">", 1) = 1
The ASM Active Change Directory (ACD) is, simply put, the equivalent of redo records for ASM metadata. Note that every ASM instance has its own ACD thread, so a two-node RAC carries 84 MB of ACD directory information.
The ACD mainly records data such as the thread number, block number, opcode, sequence, and record length.
When a running ASM instance crashes suddenly, the restarted instance uses the ACD to perform instance recovery, which is what lets the ASM instance come back up cleanly.
If the ACD information is corrupted, the disk group cannot be mounted.
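The arguments of the ORA-600 [kfrValAcd30] error line up with the ACD checkpoint values examined with kfed later in this post. A minimal decoder sketch, in Python; the field meanings are my assumption inferred from matching the argument values against the kfed output, since Oracle does not publicly document this layout:

```python
def decode_kfrvalacd30(args):
    """Decode the ORA-600 [kfrValAcd30] argument list seen in this trace.

    Field meanings are an ASSUMPTION inferred from the values in the
    alert log and the kfed dumps; Oracle does not document this layout.
    """
    return {
        "diskgroup": args[0],       # disk group name, e.g. "DATA"
        "thread": int(args[1]),     # ACD thread being recovered
        "ckpt_seq": int(args[2]),   # checkpoint sequence (kfracdc.ckpt.seq)
        "ckpt_blk": int(args[3]),   # checkpoint block (kfracdc.ckpt.blk)
        "found_seq": int(args[4]),  # sequence actually found in the ACD
        "found_blk": int(args[5]),  # block actually found
    }

info = decode_kfrvalacd30(["DATA", "1", "8", "390", "9", "390"])
# Recovery starts at ckpt 8.390, but the ACD block carries sequence 9,
# so ACD validation fails and the mount is aborted:
stale = info["ckpt_seq"] != info["found_seq"]
```

Under this reading, the error is exactly a checkpoint-vs-content mismatch in the ACD, which is what the kfed inspection below goes looking for.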
Use kfed to read the ASM metadata. First, locate the AU that holds the Active Change Directory:
[grid@rac2 ~]$ kfed read /dev/dm-0 aun=2 blkn=3|grep .xptr.au |head -1
kfffde[0].xptr.au: 4 ; 0x4a0: 0x00000004
The output above shows that the ACD metadata and data should be in AU 4.
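For orientation, `kfed read <disk> aun=N blkn=M` examines the metadata block at a fixed byte offset on the disk. A small sketch of that arithmetic, assuming the common defaults of a 1 MiB allocation unit and 4 KiB ASM metadata blocks (this disk group's actual AU size is not shown in the post):

```python
AU_SIZE = 1024 * 1024   # assumed default ASM allocation unit: 1 MiB
BLOCK_SIZE = 4096       # assumed default ASM metadata block: 4 KiB

def kfed_offset(aun, blkn, au_size=AU_SIZE, block_size=BLOCK_SIZE):
    """Byte offset of metadata block `blkn` inside allocation unit `aun`."""
    return aun * au_size + blkn * block_size

# AU 4, block 1 -- the first ACD block examined below:
acd_block_offset = kfed_offset(4, 1)   # 4 MiB + 4 KiB into the disk
```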
Examine AU 4:
[grid@rac2 ~]$ kfed read /dev/dm-0 aun=4 blkn=1|more
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 8 ; 0x002: KFBTYP_CHNGDIR
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 1 ; 0x004: blk=1
kfbh.block.obj: 3 ; 0x008: file=3
kfbh.check: 17400326 ; 0x00c: 0x01098206
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfracdb.aba.seq: 2 ; 0x000: 0x00000002
kfracdb.aba.blk: 0 ; 0x004: 0x00000000
kfracdb.ents: 2 ; 0x008: 0x0002
kfracdb.ub2spare: 0 ; 0x00a: 0x0000
kfracdb.lge[0].valid: 1 ; 0x00c: V=1 B=0 M=0
kfracdb.lge[0].chgCount: 1 ; 0x00d: 0x01
kfracdb.lge[0].len: 52 ; 0x00e: 0x0034
kfracdb.lge[0].kfcn.base: 1 ; 0x010: 0x00000001
kfracdb.lge[0].kfcn.wrap: 0 ; 0x014: 0x00000000
kfracdb.lge[0].bcd[0].kfbl.blk: 0 ; 0x018: blk=0
kfracdb.lge[0].bcd[0].kfbl.obj: 4 ; 0x01c: file=4
kfracdb.lge[0].bcd[0].kfcn.base: 0 ; 0x020: 0x00000000
kfracdb.lge[0].bcd[0].kfcn.wrap: 0 ; 0x024: 0x00000000
kfracdb.lge[0].bcd[0].oplen: 4 ; 0x028: 0x0004 -- record length, like LEN in a redo logfile dump
kfracdb.lge[0].bcd[0].blkIndex: 0 ; 0x02a: 0x0000
kfracdb.lge[0].bcd[0].flags: 28 ; 0x02c: F=0 N=0 F=1 L=1 V=1 A=0 C=0
kfracdb.lge[0].bcd[0].opcode: 212 ; 0x02e: 0x00d4 -- opcode, analogous to the opcode of an update/delete/insert in a database instance
kfracdb.lge[0].bcd[0].kfbtyp: 9 ; 0x030: KFBTYP_COD_BGO -- operation type, analogous to update/delete/insert in a database instance
kfracdb.lge[0].bcd[0].redund: 17 ; 0x031: SCHE=0x1 NUMB=0x1 -- redundancy level: 17 = unprotected, 18 = mirrored, 19 = high
kfracdb.lge[0].bcd[0].pad: 63903 ; 0x032: 0xf99f
kfracdb.lge[0].bcd[0].KFRCOD_CRASH: 1 ; 0x034: 0x00000001
kfracdb.lge[0].bcd[0].au[0]: 46 ; 0x038: 0x0000002e
kfracdb.lge[0].bcd[0].disks[0]: 0 ; 0x03c: 0x0000
kfracdb.lge[1].valid: 1 ; 0x040: V=1 B=0 M=0
kfracdb.lge[1].chgCount: 1 ; 0x041: 0x01
check等信息屬于hash值,每隔3s都會更新一次,可能是由于突然掉電,cache里的信息沒有更新到磁盤中導致。
[grid@rac2 ~]$ kfed read /dev/dm-2 aun=4 blkn=0|grep ckpt
kfracdc.ckpt.seq: 8 ; 0x018: 0x00000008
kfracdc.ckpt.blk: 390 ; 0x01c: 0x00000186
[grid@rac2 ~]$ kfed read /dev/dm-0 aun=4 blkn=0|grep ckpt
kfracdc.ckpt.seq: 70 ; 0x018: 0x00000046
kfracdc.ckpt.blk: 255 ; 0x01c: 0x000000ff
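The dm-2 copy carries ckpt 8.390, exactly the checkpoint the failed recovery started from, while dm-0 carries 70.255. A small sketch of pulling the checkpoint out of `kfed read ... | grep ckpt` output programmatically, so the two copies can be compared against the ORA-600 arguments (parsing only; the field names come from the kfed output above):

```python
import re

def parse_ckpt(kfed_output):
    """Pull kfracdc.ckpt.seq / kfracdc.ckpt.blk out of kfed text output."""
    vals = {}
    for line in kfed_output.splitlines():
        m = re.match(r"kfracdc\.ckpt\.(seq|blk):\s+(\d+)", line.strip())
        if m:
            vals[m.group(1)] = int(m.group(2))
    return vals

dm2 = parse_ckpt("kfracdc.ckpt.seq: 8 ; 0x018: 0x00000008\n"
                 "kfracdc.ckpt.blk: 390 ; 0x01c: 0x00000186")
dm0 = parse_ckpt("kfracdc.ckpt.seq: 70 ; 0x018: 0x00000046\n"
                 "kfracdc.ckpt.blk: 255 ; 0x01c: 0x000000ff")
# dm2 matches the ckpt=8.390 that the alert log's recovery attempt used.
```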
3. Resolution
Recover by dumping the block with kfed, editing the text dump by hand, and merging it back:
kfed read /dev/dm-2 aun=4 blkn=0 >acd.txt
kfed merge /dev/dm-2 aun=4 blkn=0 text=acd.txt
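The post does not show which fields of acd.txt were edited between the read and the merge, but the mechanism is plain text substitution on the dump before `kfed merge` writes it back (kfed merge is generally understood to recompute the block checksum on write). A hypothetical sketch of that editing step; the field chosen here is for illustration only:

```python
import re

def patch_kfed_field(dump_text, field, new_value):
    """Replace the decimal value of one field in a kfed text dump.

    Which fields the author actually changed in acd.txt is not shown in
    the post; this only illustrates the dump-edit-merge mechanism.
    """
    pattern = re.compile(rf"^({re.escape(field)}:\s+)\d+", re.M)
    return pattern.sub(rf"\g<1>{new_value}", dump_text)

dump = "kfracdc.ckpt.seq: 8 ; 0x018: 0x00000008\n"
patched = patch_kfed_field(dump, "kfracdc.ckpt.seq", 9)
# patched text is what would be fed to: kfed merge ... text=acd.txt
```

Needless to say, editing ASM metadata with kfed merge on a production disk group should only be done with a backup of the affected blocks and, ideally, under Oracle Support guidance.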
Mount the disk group manually:
SQL> alter diskgroup data mount;
alter diskgroup data mount
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [8],
[390], [9], [390], [], [], [], [], []
SQL> alter diskgroup data mount;
Diskgroup altered.