I think some clarification is in order: definite integrals calculate that limit because that's what they are *defined* to do. The definite integral of f from a to b is *not* F(b)-F(a), at least not by definition. By definition, the integral is exactly the limit you mentioned: the limit of sums of terms f(x)∆x as the partition gets finer.
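In symbols, using for instance a regular partition of [a, b] into n subintervals (the form most calculus texts use; the sample point $x_i^*$ can be chosen anywhere in the i-th subinterval):

$$\int_a^b f(x)\,dx \;=\; \lim_{n\to\infty}\sum_{i=1}^{n} f(x_i^*)\,\Delta x, \qquad \Delta x = \frac{b-a}{n}.$$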
So your real question isn't why definite integrals calculate that weird limit (they do because that's how they're defined). The real question is why antiderivatives can be used to calculate definite integrals. And the answer is that this is a very deep theorem, the real underpinning of integral calculus: the fundamental theorem of calculus. You should recognize that your question is aimed at one of the core aspects of the theory, so it's a good question!
Now the theorem says (among other things) that if f is integrable on [a, b] and has an antiderivative F there, then the integral of f from a to b equals F(b)-F(a).
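Written out, under those hypotheses:

$$\int_a^b f(x)\,dx \;=\; F(b) - F(a).$$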
As to why the theorem holds, intuitively: if F is an antiderivative of f, then f is the rate of change of F. So f(x)∆x is an approximation of how much F changes over an interval of length ∆x. It's only an approximation because we're pretending f is constant on that interval. Now if we add up all these approximate little changes, we get an approximate total change of F over the whole interval from a to b.

The true total change, however, is F(b)-F(a). You can see this by adding up the *exact* little changes: with partition points a = x₀ < x₁ < ⋯ < xₙ = b, the sum (F(x₁)-F(x₀)) + (F(x₂)-F(x₁)) + ⋯ + (F(xₙ)-F(xₙ₋₁)) telescopes, everything cancels in pairs, and only F(b)-F(a) survives.

The idea is that if we make the approximations of the little changes more accurate, the approximation of the total change will also become more accurate, and in the limit will be equal to the true total change. And how do we make the little approximations more accurate? By making the intervals ∆x smaller. So in the limit as the interval lengths go to 0, the approximation of the total change becomes exact. But this limit is nothing other than the definite integral, so the definite integral is equal to the total change F(b)-F(a).
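If it helps to see this numerically, here is a minimal sketch; the choices of f(x) = x², the interval [0, 1], and left endpoints as sample points are mine, purely for illustration. An antiderivative is F(x) = x³/3, so F(1) − F(0) = 1/3, and the sums of f(x)∆x indeed approach 1/3 as ∆x shrinks:

```python
import numpy as np

def f(x):  # integrand (illustrative choice)
    return x**2

def F(x):  # an antiderivative of f
    return x**3 / 3

a, b = 0.0, 1.0

for n in [10, 100, 1000, 10000]:
    dx = (b - a) / n                 # interval length ∆x
    x = a + dx * np.arange(n)        # left endpoint of each subinterval
    riemann_sum = np.sum(f(x) * dx)  # sum of the approximate little changes f(x)∆x
    print(f"n = {n:5d}: sum = {riemann_sum:.6f}   (F(b) - F(a) = {F(b) - F(a):.6f})")
```

Any other choice of sample points gives the same limit; that is exactly what integrability of f guarantees.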