Hi all! I'm completely lost with why Pearson's r is the same as the standardized slope (β) in a simple bivariate regression.

I thought that the standardized slope represented the units of increase in the z-score of Y that we would expect to see with every unit of z-score increase that we see in X. But that seems like it would invite the possibility of slopes > 1 (i.e. for an increase of 1 std deviation of IQ score, income goes up by 1.3 std deviations).

On the other hand, Pearson's r is inherently limited to between -1 and 1. So it couldn't be 1.3. Help? Can someone explain why the standardized slope has to be the same as r?

---

>But that seems like it would invite the possibility of slopes > 1 (i.e. for an increase of 1 std deviation of IQ score, income goes up by 1.3 std deviations).

The resolution is that this isn't actually possible. This isn't obvious; formally it follows from the Cauchy-Schwarz inequality. But maybe it's more intuitive that a larger slope of, say, 1000 should be impossible: that would mean that almost everything is an outlier in the y-direction, since being e.g. 0.1 SD from the mean on x would put you 100 SD from the mean on y.
Algebraically: the raw slope is b = r(SD_y/SD_x). After standardizing, both variables have SD 1, so

β = r(SD_y/SD_x) = r(1/1) = r
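
You can check this numerically. A minimal sketch (simulated data, variable names are my own) that z-scores both variables, fits the regression, and confirms the slope matches Pearson's r:

```python
# Verify that the OLS slope on z-scored data equals Pearson's r.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(100, 15, size=1000)          # IQ-like predictor
y = 0.5 * x + rng.normal(0, 10, size=1000)  # linearly related outcome plus noise

# Pearson's r from the correlation matrix
r = np.corrcoef(x, y)[0, 1]

# Standardize both variables (z-scores)
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# Slope of the simple regression of zy on zx
beta = np.polyfit(zx, zy, 1)[0]

print(np.isclose(beta, r))  # True: standardized slope == Pearson's r
```

Since |r| ≤ 1 always, the standardized slope can never be 1.3, which resolves the original worry.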