SCI Spark Master
02-21-2018 12:40 AM
Our SCI 3.6.0 recently gave this error: "Your Spark cluster (Spark master IP: x.x.x.x) was unable to accept a job submission."
I can't seem to find any information regarding this notification or how to troubleshoot it. Please help.
3 REPLIES
02-21-2018 01:04 AM
Hi Marius, information regarding this notification is available in the User Guide. The notification shows up when a number of jobs are backlogged, and it should go away as soon as the backlogged jobs are cleared. However, if it doesn't go away after a few hours, please reach out to the customer support team. Thanks!
02-21-2018 01:24 AM
This is not clearing by itself, and SCI has also stopped collecting data, even though the node status is green.
I will contact support.
02-21-2018 05:41 AM
Hello Marius, you can check the User Guide, because it can be something related to disk space.
You should check the services with the "sudo docker ps" command and see whether it is necessary to restart them. It once happened to me that the time wasn't synchronized and SCI stopped working, so I had to restart the NTP service and it started working again.
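As a rough illustration of the checks above, here is a minimal shell sketch. It assumes a Linux host; the service names (`docker`, `ntp`) and the `timedatectl` availability are assumptions and may differ on your SCI appliance, so adjust accordingly:

```shell
# 1. Disk space: SCI can refuse Spark job submissions when a partition fills up.
df -h

# 2. Container health: every SCI service container should report STATUS "Up".
#    (Guarded so the sketch degrades gracefully where docker/sudo are unavailable.)
command -v docker >/dev/null && sudo docker ps || echo "docker not available here"

# 3. Clock sync: a drifted clock can stall data collection.
command -v timedatectl >/dev/null && timedatectl | grep -i 'synchronized' || echo "timedatectl not available here"

# 4. If the clock has drifted, restart the NTP service.
#    Uncomment after confirming the service name on your system (ntp vs ntpd):
# sudo systemctl restart ntp
```

If all containers show "Up" and the clock is synchronized, disk space is the next thing to rule out before escalating to support.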

