Has anyone successfully created a vSZ instance in AWS using multiple interfaces for the different control planes (Management/AP Control/Cluster), as is possible on a traditional hypervisor setup?
The Ruckus vSZ AWS documentation does not mention this, and from my own testing so far, vSZ does not detect additional interfaces on the instance (unlike on normal hypervisor platforms).
Ruckus support seem to think there should be no limitation, which I disagree with based on my own testing, and I have fed this back to them.
I've raised this with our SE, and internal discussions seem to suggest this isn't possible in AWS. However, I've not had 100% confirmation so far, and I don't think anyone (at least that I've spoken to) knows for sure.
Thought I'd post on here in case anyone else in the community has managed to get this working in AWS (specifically using multiple vNICs; a single interface works fine).
This seems like quite a big drawback on Ruckus's part, and it absolutely should be possible if they wish to support vSZ being run on cloud infrastructure like AWS.
Howdy! I have three in AWS, and even with close to 1k APs and several hundred switches, we have never come close to maxing out a single interface. We have several RADIUS servers, Cloudpath, Twilio, Hotspot services, and DPSK running in and out of the cluster. I've never once thought about this or experienced this particular setup as an issue. Our vSZ AWS cluster works similarly to our multi-interface vSZ clusters we host in our data center. I'll have to run some reports and see what the actual values are.
Yeah, I could likely get it working with a single interface; however, this is mainly a concern from a logical perspective rather than a capacity one. Being restricted to a single interface heavily limits the flexibility of running vSZ in the cloud.
I've got around 50 ZD5Ks' worth of APs that I want to shift into AWS vSZ clusters. Separating the planes onto different interfaces means I could apply different security groups to each plane: for example, the management plane could sit in a separate VPC with multiple VPN endpoints back into our core network, completely segregated from the AP control plane, which would be public-facing so that our APs can talk back to it.
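To illustrate the per-plane separation I mean, here's a rough AWS CLI sketch (all subnet/SG/instance IDs are placeholders, and this assumes vSZ would actually bind its planes to the extra ENIs, which is exactly the part I can't get working):

```shell
# Hypothetical sketch: one ENI per plane, each in its own subnet with
# its own security group. All IDs below are placeholders.

# AP control plane ENI - public-facing subnet, SG allowing AP traffic only
aws ec2 create-network-interface \
  --subnet-id subnet-apcontrol \
  --groups sg-apcontrol \
  --description "vSZ AP control plane"

# Management plane ENI - private subnet, SG restricted to the mgmt VPC/VPNs
aws ec2 create-network-interface \
  --subnet-id subnet-mgmt \
  --groups sg-mgmt \
  --description "vSZ management plane"

# Attach the management ENI to the vSZ instance as its second interface
aws ec2 attach-network-interface \
  --network-interface-id eni-xxxxxxxx \
  --instance-id i-xxxxxxxx \
  --device-index 1
```

The AWS side of this is straightforward; the problem is that the vSZ setup wizard never offers the multi-interface option when it's running on an AWS instance.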
It also means I wouldn't need a public IP address for every node in the cluster. I could put a network load balancer in front of the cluster with a single public IP on it and use that as the NAT control IP for every node in the cluster.
It's workable using a single interface, but it doesn't scale particularly well.
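As a rough sketch of the NLB idea (placeholder IDs/ARNs, untested; port 443 is just an example, as vSZ uses several AP control ports that would each need a listener, and whether vSZ's NAT IP setting behaves behind an NLB is exactly the open question):

```shell
# Hypothetical sketch: internet-facing NLB with a single Elastic IP in
# front of the cluster; the EIP becomes the NAT control IP given to APs.
aws elbv2 create-load-balancer \
  --name vsz-ap-control \
  --type network \
  --scheme internet-facing \
  --subnet-mappings SubnetId=subnet-public,AllocationId=eipalloc-xxxxxxxx

# Target group for one AP control port (repeat per port vSZ requires)
aws elbv2 create-target-group \
  --name vsz-ap-443 \
  --protocol TCP --port 443 \
  --target-type instance \
  --vpc-id vpc-xxxxxxxx

# Forward that port to the cluster nodes registered in the target group
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

With something like this, only the NLB holds a public IP, and cluster nodes could live entirely in private subnets.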