Hi all! I'm completely lost with why Pearson's r is the same as standardized slope ( β ) in a simple bivariate regression.
I thought the standardized slope represented the expected increase in the z-score of Y for every one-standard-deviation (z-score) increase in X. But that definition seems to invite the possibility of slopes > 1 (e.g., for an increase of 1 standard deviation in IQ score, income goes up by 1.3 standard deviations).
On the other hand, Pearson's r is inherently bounded between -1 and 1, so it could never be 1.3. Can someone explain why the standardized slope has to equal r?
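For what it's worth, here's a quick numerical sketch (with made-up simulated data) of the equality I'm asking about: standardize both variables, fit a simple OLS regression, and the slope comes out identical to Pearson's r.

```python
import numpy as np

# Simulated data (hypothetical example, not real IQ/income figures)
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

# Pearson's r
r = np.corrcoef(x, y)[0, 1]

# Standardize both variables (z-scores)
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# Slope of the simple regression of zy on zx
beta = np.polyfit(zx, zy, 1)[0]

print(r, beta)  # the two values agree to floating-point precision
```

So empirically the standardized slope never exceeds 1 in the bivariate case, which is exactly what confuses me about the "z-scores of Y per z-score of X" interpretation.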