I think you're confusing multiple concepts, like others have said. Specifically, 802.11b uses a totally different signaling scheme (DSSS) than 802.11g/n/ac, which use OFDM, and there are interoperability mechanisms at play when old and new devices share a channel.
But even setting that aside, 802.11n (and ac) defines a set of bitrates (MCS indexes), which determine how efficiently a transmitter uses its airtime. Lower bitrates are easier to decode through noise and are less likely to be corrupted when the signal is weak. Higher bitrates are more efficient when SNR is high, but the downside is that if the other end can't decode a frame, you must retransmit the whole thing (either at the same rate or a slower one). So there's a constant balancing game: the transmitter keeps adjusting its rate based on how well recent frames got through.
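To make that balancing game concrete, here's a minimal sketch of step-up/step-down rate adaptation. The rate table, window sizes, and thresholds are all made up for illustration; real drivers (e.g. Linux's Minstrel) use more sophisticated statistics, but the shape of the feedback loop is similar:

```python
# Toy rate-adaptation sketch: step up after a run of ACKed frames,
# step down after a run of losses. All constants are illustrative.

RATES_MBIT = [6, 12, 24, 48, 108, 300, 500]  # hypothetical PHY rates

class RateAdapter:
    def __init__(self):
        self.idx = 0          # start at the most robust (slowest) rate
        self.successes = 0
        self.failures = 0

    def current_rate(self):
        return RATES_MBIT[self.idx]

    def on_tx_result(self, acked: bool):
        if acked:
            self.successes += 1
            self.failures = 0
            # 10 clean frames in a row: try the next faster rate
            if self.successes >= 10 and self.idx < len(RATES_MBIT) - 1:
                self.idx += 1
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            # 3 consecutive losses: retreat to an easier-to-decode rate
            if self.failures >= 3 and self.idx > 0:
                self.idx -= 1
                self.failures = 0
```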
Beyond that, remember that wifi is a shared medium. On a given channel, only one station can transmit at a time, so there's a finite amount of airtime. Now imagine one client that can only manage to talk at 1 Mbit/s and be heard, and another really awesome client that can talk at 500 Mbit/s successfully, and both want to talk. If the "slow" client spends a whole 1-second window talking, the network is effectively operating at 1 Mbit/s during that window. If the fast client talks the whole time, the network runs at 500 Mbit/s. And if each talks for 0.5 seconds, the network moves (0.5 Mbit + 250 Mbit) / (1 s) = 250.5 Mbit/s.
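You can sanity-check that arithmetic in a few lines; the rates and airtime shares below are just the numbers from the example:

```python
# Effective throughput = sum over clients of (rate * airtime share),
# since only one client transmits at a time on a shared channel.

clients = [
    {"name": "slow", "rate_mbit": 1,   "airtime": 0.5},  # 0.5 s of a 1 s window
    {"name": "fast", "rate_mbit": 500, "airtime": 0.5},
]

total = sum(c["rate_mbit"] * c["airtime"] for c in clients)
print(f"effective throughput: {total} Mbit/s")  # 250.5 Mbit/s
```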
So... which client do you let spend more airtime? Obviously, letting the clients with stronger signals and faster transmit rates use the channel will increase total network throughput, but at the same time, starving out the slow clients entirely is inappropriate too. They're on your network presumably because they want network connectivity! These decisions are part of the secret sauce of APs, and you'll see a number of marketing names used to describe them (such as Airtime Fairness, Throughput Fairness, and various QoS schemes).
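As a rough illustration of what "airtime fairness" means in practice (this is a toy deficit-style scheduler I made up for the example, not any vendor's actual implementation): charge each client for the airtime its frames consume (bits / PHY rate), and always serve whoever has used the least airtime so far. A slow client gets charged far more airtime per byte, so it naturally moves less data without being starved of transmit opportunities:

```python
# Toy airtime-fairness scheduler: each queued frame costs
# airtime = bits / PHY rate, and we always serve the client
# that has consumed the least airtime so far.

import heapq

def schedule(clients, sim_seconds=1.0):
    """clients: list of (name, rate_mbit, frame_bytes) tuples."""
    heap = [(0.0, name, rate, size) for name, rate, size in clients]
    heapq.heapify(heap)
    clock = 0.0
    sent_bits = {name: 0 for name, _, _ in clients}

    while clock < sim_seconds:
        used, name, rate, size = heapq.heappop(heap)
        tx_time = (size * 8) / (rate * 1e6)   # seconds on air for one frame
        clock += tx_time
        sent_bits[name] += size * 8
        heapq.heappush(heap, (used + tx_time, name, rate, size))

    # average per-client throughput in Mbit/s over the window
    return {n: bits / 1e6 / sim_seconds for n, bits in sent_bits.items()}

# 1 Mbit/s client vs 500 Mbit/s client, both sending 1500-byte frames:
print(schedule([("slow", 1, 1500), ("fast", 500, 1500)]))
# Each ends up with ~half the airtime, so the fast client moves ~500x
# more data, and aggregate throughput lands near the 250.5 Mbit/s above.
```

The alternative, throughput fairness, would give both clients equal bits per second, which drags the whole network down toward the slow client's rate, exactly the tradeoff the 1 Mbit vs 500 Mbit example illustrates.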