One way (maybe not the most intuitive one) to see why fraction multiplication has to be defined this way is to derive it from basic properties of multiplication and division.

Let's say you have two fractions, written x = a/b and y = c/d, that you wish to multiply.

Multiplying them directly would require the very rule we are trying to justify, so let's assume we don't yet know it. Instead, let's try to do everything with plain multiplication.

We can rewrite the definitions of x and y so that no fractions are involved: x\*b = a and y\*d = c. This is like saying that 7 = 14/2 is equivalent to 7\*2 = 14. Agree so far?
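This restatement is easy to sanity-check numerically. A minimal sketch, using the 14/2 example from above (the values are arbitrary):

```python
# If x = a/b, then x*b = a: the fraction-free restatement of the definition.
a, b = 14, 2
x = a / b          # x = 7.0

# Multiplying back by the denominator recovers the numerator.
print(x * b == a)  # True
```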

Now, since equal numbers have equal products, we can multiply the left-hand sides and the right-hand sides to get (x\*b)\*(y\*d) = a\*c.

By the standard rules of multiplication (commutativity and associativity) we can regroup the left-hand side to isolate x\*y, the product of the fractions we wanted to multiply:

(x\*y)\*(b\*d) = a\*c

divide both sides by b\*d (which we may do, since b and d are denominators and therefore nonzero):

x\*y = (a\*c)/(b\*d)

That looks familiar, doesn't it?

This shows that x\*y can be written as a fraction with a\*c as its numerator and b\*d as its denominator: exactly the usual rule (a/b)\*(c/d) = (a\*c)/(b\*d).
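As a final check, the derived rule agrees with exact rational arithmetic. A small sketch using Python's `fractions` module, with arbitrary example values for a, b, c, d:

```python
from fractions import Fraction

# Arbitrary example fractions x = a/b and y = c/d.
a, b = 3, 4
c, d = 5, 7

x = Fraction(a, b)
y = Fraction(c, d)

# The product computed by the library...
direct = x * y

# ...matches the fraction built from the derived numerator a*c
# and denominator b*d.
derived = Fraction(a * c, b * d)

print(direct, derived)  # 15/28 15/28
```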