07-05-2023 09:07 PM
Hi Cesar,
Yes, we can find the actual reason in the snapshot logs > applogfile > configurer folder (search with the keyword "state"). For example:
2023-06-20 08:25:25,842 Configurer[internalScheduler-11] INFO c.r.w.c.o.ClusterBackupOperationService - <OPT> [Backup] requests all blade to take snapshot with time string [20230620082521] and set cluster state as Maintenance
2023-06-20 08:25:25,842 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateMachine - Change cluster state from In_Service to Maintenance
2023-06-20 08:25:25,842 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateMachine - It is the leader, update the shared info from blade [Ruckus-LAB-SmartZone100] changes :
{ClusterState=Maintenance}
2023-06-20 08:25:25,843 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateChangedHandler - Cluster state change : NewState [Maintenance]
==========================================================================
2023-06-20 08:25:35,659 Configurer[internalScheduler-11] INFO c.r.w.c.o.ClusterBackupOperationService - <OPT> [Backup] all blade complete the snapshot, so set cluster state as In_Service
2023-06-20 08:25:35,659 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateMachine - Change cluster state from Maintenance to In_Service
2023-06-20 08:25:35,660 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateMachine - It is the leader, update the shared info from blade [Ruckus-LAB-SmartZone100] changes :
{ClusterState=In_Service}
2023-06-20 08:25:35,660 Configurer[internalScheduler-11] INFO c.r.w.c.d.s.ClusterStateChangedHandler - Cluster state change : NewState [In_Service]
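In this excerpt the change from In_Service to Maintenance (and back) was triggered by the cluster backup taking a snapshot, so the state change is expected. If you want to filter these lines offline instead of searching by hand, here is a minimal Python sketch; the folder path is an assumption about where the snapshot archive is unpacked, so adjust it to your layout:

import re
from pathlib import Path

# Hypothetical location after unpacking the snapshot archive; adjust as needed.
LOG_DIR = Path("snapshot_logs/applogfile/configurer")

# Search with the keyword "state" as suggested above; a case-insensitive match
# also catches the ClusterStateMachine / ClusterStateChangedHandler entries.
STATE = re.compile(r"state", re.IGNORECASE)

for log_file in sorted(LOG_DIR.glob("*.log*")):
    with log_file.open(errors="replace") as fh:
        for line_no, line in enumerate(fh, start=1):
            if STATE.search(line):
                print(f"{log_file.name}:{line_no}: {line.rstrip()}")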
Regards
Saurabh