1 and 2: A cluster is a group of nodes that function as one single controller. This lets you manage more APs than a single node could, and also provides redundancy and failover, because the database is replicated across all nodes in the cluster.
3. Yes. Many of our service provider customers deploy Ruckus controllers in this way.
Please let me know if you would like further detail on how this is achieved.
A cluster allows you to join up to 4 SZ or vSZ nodes so they operate together as an active/active redundancy group. All nodes actively control and monitor Access Points. If one node (vSZ/SZ) fails, its APs will automatically move to another active node in less than 1 minute.
You can connect to the web UI (https://<node-management-IP>:8443) of any of the nodes in the cluster and get a view of all nodes. This is known as single-pane-of-glass management. Any operation performed in the web UI of one node is executed on all nodes in the cluster, which allows the cluster to operate as one large integrated system.
Because the nodes share a replicated database, it is best that all nodes in a cluster be located in the same data center/location; latency between nodes can cause database issues and operational problems. APs, however, can connect remotely to any of the nodes.
You can build a redundant cluster in another location and configure geo-redundancy for full network backup capability. Today this acts as a 1+1 (active/standby) system: one cluster (of multiple nodes) stays in standby mode and only controls APs if the active cluster is not reachable by the Access Points.
After APs initially connect to the system (using SZ discovery via DHCP Option 43, the Ruckus Registration Server, DNS, or manually with the set scg ip AP CLI command) and the connection is approved, the APs are provisioned with the IP addresses of all nodes in the cluster (the C-list). An AP uses this list to move to another node if its present node stops responding.
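As a rough sketch of what the DHCP-based discovery relies on: Option 43 carries vendor-specific sub-options, and SmartZone discovery is documented as a sub-option containing an ASCII list of controller IPs. The snippet below builds such a payload, assuming sub-option code 6 and a comma-separated IP list; the exact sub-option code and format should be verified against the SmartZone deployment guide for your release, and the addresses shown are hypothetical.

```python
# Sketch: build a DHCP Option 43 payload for SmartZone AP discovery.
# Assumption: sub-option code 6 carries a comma-separated ASCII list of
# controller IPs (verify against your SmartZone deployment guide).

def build_option43(controller_ips):
    """Return Option 43 bytes: sub-option code, length, ASCII IP list."""
    payload = ",".join(controller_ips).encode("ascii")
    if len(payload) > 255:
        raise ValueError("sub-option payload exceeds 255 bytes")
    # TLV layout: one byte code, one byte length, then the value.
    return bytes([6, len(payload)]) + payload

# Example: two cluster nodes (hypothetical documentation addresses).
opt43 = build_option43(["192.0.2.10", "192.0.2.11"])
print(opt43.hex())  # hex form, as many DHCP servers accept for raw options
```

Many DHCP servers accept the option either as a raw hex string (as printed above) or through a vendor-option definition; either way, the APs receive the controller list before they ever reach the cluster.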
While a fully resourced vSZ-H node can support up to 10K APs, headroom must be left so that if a node fails, the surviving nodes have enough resources to manage its APs. So a 2-node cluster can manage 10K APs, a 3-node cluster up to 20K APs, and a 4-node cluster up to 30K APs. For redundancy to work, the maximum number of APs should be: (max APs per node) x (number of nodes - 1).
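The N+1 sizing rule above can be worked through in a few lines; this is just the arithmetic from the text, with the 10K-per-node figure taken from the fully resourced vSZ-H case:

```python
# N+1 capacity rule from the text: leave one node's worth of headroom so a
# single node failure never strands APs.

MAX_APS_PER_NODE = 10_000  # fully resourced vSZ-H, per the text

def redundant_capacity(num_nodes, max_per_node=MAX_APS_PER_NODE):
    """Max APs a cluster should carry while still surviving one node failure."""
    if num_nodes < 2:
        return 0  # no failover partner, so no redundant capacity
    return max_per_node * (num_nodes - 1)

for n in (2, 3, 4):
    print(f"{n}-node cluster: up to {redundant_capacity(n)} APs")
# 2-node cluster: up to 10000 APs
# 3-node cluster: up to 20000 APs
# 4-node cluster: up to 30000 APs
```

Note that adding a node therefore buys you one node's worth of extra capacity, not a full node's nominal maximum, because the headroom requirement stays constant at one node.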
More details on cluster operations can be found in the online operational manuals and by searching the online Knowledge Base, for example: