Hi all! I'm completely lost as to why Pearson's r is the same as the standardized slope (β) in a simple bivariate regression.

I thought the standardized slope represented the increase in the z-score of Y that we would expect for every one-unit increase in the z-score of X. But that seems to invite the possibility of slopes > 1 (e.g. for an increase of 1 standard deviation in IQ score, income goes up by 1.3 standard deviations).

On the other hand, Pearson's r is inherently limited to between -1 and 1, so it couldn't be 1.3. Help? Can someone explain why the standardized slope has to be the same as r?
>But that seems to invite the possibility of slopes > 1 (e.g. for an increase of 1 standard deviation in IQ score, income goes up by 1.3 standard deviations).

The resolution is that this isn't actually possible. That's not obvious; formally it follows from the Cauchy-Schwarz inequality. But maybe it's more intuitive that a much larger slope, say 1000, should be impossible: it would mean that almost everything is an outlier in the y-direction, since being even 0.1 SD from the mean on x would put you 100 SD from the mean on y.
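
If it helps to see this numerically, here's a quick numpy sketch (the simulated variables, the seed, and the raw slope of 2.5 are just for illustration): the raw slope is well above 1, but once both variables are z-scored the fitted slope collapses to Pearson's r.

```python
import numpy as np

# Simulate data whose *raw* regression slope is far above 1
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2.5 * x + rng.normal(size=10_000)   # raw OLS slope is about 2.5

# Standardize both variables to z-scores
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

slope_std = np.polyfit(zx, zy, deg=1)[0]   # OLS slope of zy on zx
r = np.corrcoef(x, y)[0, 1]                # Pearson's r

# The two agree (about 0.93 here), and neither can exceed 1 in absolute value
print(slope_std, r)
```

Try cranking the raw slope up toward 1000: r just creeps closer to 1, and the standardized slope creeps with it.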
Another way to see it: $\beta = r \cdot \frac{SD_Y}{SD_X} = r \cdot \frac{1}{1} = r$.
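
In case it isn't obvious where that identity comes from, here it is spelled out (standard notation, not from the thread: $s_X$, $s_Y$ are the sample standard deviations and $\operatorname{Cov}$ is the sample covariance):

$$\hat\beta \;=\; \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(X)} \;=\; \frac{\operatorname{Cov}(X,Y)}{s_X s_Y}\cdot\frac{s_Y}{s_X} \;=\; r\,\frac{s_Y}{s_X}.$$

Once both variables are standardized, $s_X = s_Y = 1$, so the slope is exactly $r$. And the Cauchy-Schwarz inequality gives $|\operatorname{Cov}(X,Y)| \le s_X s_Y$, which is precisely the statement that $|r| \le 1$.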