VSZ - AWS - Multiple Interfaces
03-23-2020 10:36 AM
Hi,
Has anyone successfully created a vSZ instance in AWS using multiple interfaces for the different control planes (Management/AP Control/Cluster), as is possible on a traditional hypervisor setup?
The Ruckus vSZ AWS documentation does not mention this, and from my own testing so far, vSZ does not detect when the instance has multiple interfaces (unlike on normal hypervisor platforms).
Ruckus support seem to think there should be no limitation, which I disagree with based on my own testing, and I have fed this back to them.
I've raised this with our SE, and internal discussions seem to suggest that this isn't possible in AWS. However, I've not had 100% clarification on this so far, and I don't think anyone (at least that I've spoken to) knows for sure.
Thought I'd post on here in case anyone else in the community has managed to get this working in AWS (specifically using multiple vNICs; a single interface works fine).
This seems like quite a big drawback on Ruckus's part, and it absolutely should be possible if they wish to support running vSZ on cloud infrastructure like AWS.
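For reference, AWS itself has no problem attaching multiple interfaces to an instance; the sketch below (all subnet/security-group/instance/ENI IDs are placeholders for illustration) shows how a second ENI intended for a separate plane would be created and attached with the AWS CLI. The open question is only whether vSZ detects it once attached:

```shell
# Create a second ENI in the subnet intended for the extra plane
# (subnet-aaaa, sg-bbbb are placeholder IDs)
aws ec2 create-network-interface \
    --subnet-id subnet-aaaa \
    --groups sg-bbbb \
    --description "vSZ secondary plane"

# Attach it to the vSZ instance as the second interface (device-index 1)
# (eni-dddd, i-cccc are placeholder IDs)
aws ec2 attach-network-interface \
    --network-interface-id eni-dddd \
    --instance-id i-cccc \
    --device-index 1
```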
Thanks
9 Replies
03-23-2020 10:48 AM
Howdy! I have three in AWS; even with close to 1k APs and several hundred switches, we have never come close to maxing out a single interface. We have several RADIUS servers, Cloudpath, Twilio, Hotspot services, and DPSK running in and out of the cluster. For me, I've never once thought about this or experienced this particular setup as an issue.
Our vSZ AWS cluster works similarly to our multi-interface vSZ clusters we host in our data center.
I'll have to run some reports and see what the actual values are.
03-23-2020 10:57 AM
Just checked the interface on a single one of my AWS instances (which looks to be hosting 130 APs); it looks to be consuming roughly 2 Mbps sustained over the last 24 hours (System > Cluster).

03-23-2020 11:03 AM
Hi Andrew,
Yeah, I could likely get it working with a single interface; however, this is mainly about the logical design rather than capacity. Being restricted to a single interface heavily limits the flexibility of running vSZ in the cloud.
I've got around 50 ZD5Ks' worth of APs that I want to shift into AWS vSZ clusters. Separating the planes onto different interfaces means I could apply different security groups to each plane, with, for example, the management plane accessible in a separate VPC that has multiple VPN endpoints back into our core network, completely segregated from the AP control plane, which would be public-facing so that our APs can talk back to it.
It also means I don't need to use a public IP address for every node in the cluster. I could put a network load balancer in front of the cluster with a single public IP on it and use that as the NAT control IP for every node in the cluster.
It's workable using a single interface, but it doesn't scale particularly well.
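As a rough sketch of the NLB idea (again with placeholder IDs; the forwarded port below is the AP control port, which varies by vSZ version, so it's an assumption to adjust for your deployment):

```shell
# Single internet-facing NLB as the one public entry point for the cluster
# (subnet-aaaa, vpc-eeee are placeholder IDs)
aws elbv2 create-load-balancer \
    --name vsz-control-nlb \
    --type network \
    --scheme internet-facing \
    --subnets subnet-aaaa

# Target group containing the cluster nodes on the AP control port
# (port 91 is assumed here; check your vSZ version's port requirements)
aws elbv2 create-target-group \
    --name vsz-ap-control \
    --protocol TCP --port 91 \
    --vpc-id vpc-eeee \
    --target-type instance

# Listener forwarding that port to the target group
# (substitute the ARNs returned by the two commands above)
aws elbv2 create-listener \
    --load-balancer-arn <nlb-arn> \
    --protocol TCP --port 91 \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

The NLB's single public IP would then be set as the NAT/public control address, while the nodes themselves sit on private addresses.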
Cheers,
Jamie
03-23-2020 11:08 AM
Sounds compelling.
Best of luck to you!

