Issue:
We often come across a scenario in which the MRP0 process stops applying logs and its status in GV$MANAGED_STANDBY shows WAIT_FOR_LOG.
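A quick way to confirm the symptom is to check the lag on the standby. The sketch below assumes the standby instance is mounted or open so that V$DATAGUARD_STATS can be queried:
SQL> select name, value, time_computed from v$dataguard_stats where name in ('transport lag', 'apply lag');
If the transport lag stays near zero while the apply lag keeps growing, redo is arriving but MRP0 is not applying it.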
Workaround:
Check LNS on the Primary Database: WRITING
SQL> select inst_id, process, status, thread#, sequence#, block#, blocks from gv$managed_standby where process in ('RFS','LNS','MRP0');
   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- --------- ------------ ---------- ---------- ---------- ----------
         2 LNS       WRITING               2         74      17873          1
         1 LNS       WRITING               1         79      24255          1
LNS Process:
In Data Guard, the LNS process performs the actual network I/O and waits for each network I/O to complete.
The NET_TIMEOUT attribute is used only when the LGWR process transmits redo data using an LGWR Network Server (LNS) process.
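For reference, NET_TIMEOUT is set as part of the redo transport destination on the primary. A minimal sketch, assuming the standby destination is LOG_ARCHIVE_DEST_2, the standby's DB_UNIQUE_NAME is standb, and a 30-second timeout (the values are illustrative only):
SQL> alter system set log_archive_dest_2='SERVICE=standb ASYNC NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standb' scope=both sid='*';
When the broker manages the configuration, the equivalent is the NetTimeout database property, e.g. DGMGRL> edit database standb set property NetTimeout=30;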
Check MRP0 on the Standby Database: WAIT_FOR_LOG
SQL> select inst_id, process, status, thread#, sequence#, block#, blocks from gv$managed_standby where process in ('RFS','LNS','MRP0');
   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- --------- ------------ ---------- ---------- ---------- ----------
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  1         79      25361          1
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 MRP0      WAIT_FOR_LOG          2         74          0          0
Check Broker Status
Check the apply lag as reported by the broker.
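The status below is typically produced with the broker's show database command (assuming the standby is registered in the configuration as standb):
DGMGRL> show database standb;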
Database - standb

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       2 hour(s) 55 minutes 47 seconds
  Real Time Query: OFF
  Instance(s):
    standb1
    standb2 (apply instance)

Database Status:
SUCCESS
Apply Lag: we can clearly see an apply lag of 2 hour(s) 55 minutes 47 seconds even though the transport lag is 0 seconds, i.e. redo is being shipped but not applied.
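As an optional cross-check on the standby, the highest applied sequence per thread can be compared with the sequences the RFS processes are receiving. A sketch using V$ARCHIVED_LOG, queried on the standby:
SQL> select thread#, max(sequence#) as last_applied from v$archived_log where applied = 'YES' group by thread#;
A per-thread maximum that stays behind the received sequences confirms that apply, not transport, is lagging.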
Re-enable Standby Database
DGMGRL> enable database standb;
Enabled.
Now check the status again.
SQL> select inst_id, process, status, thread#, sequence#, block#, blocks from gv$managed_standby where process in ('RFS','LNS','MRP0');
   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       RECEIVING             0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  1         80         58          1
         2 RFS       IDLE                  0          0          0          0
         2 RFS       IDLE                  0          0          0          0
         2 MRP0      APPLYING_LOG          1         79      24581      25582
Now it is back to normal. APPLYING_LOG is the expected status when real-time apply is running.
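If the broker is not in use, the same effect can usually be achieved by bouncing redo apply directly with SQL on the standby. A sketch; the USING CURRENT LOGFILE clause assumes standby redo logs are configured so that real-time apply is possible:
SQL> alter database recover managed standby database cancel;
SQL> alter database recover managed standby database using current logfile disconnect from session;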