I wonder if talking about a metaphor for independence would help.
If we start with two "axioms", X and Y, and visualize them as vectors in the plane (specifically, we're focusing on the integral vectors, like graph paper), namely the unit vector along the positive X axis and the unit vector along the positive Y axis, and we derive all possible "conclusions" from X and Y by summing them, and summing their sums, and so on, then we get X + X, X + Y, Y + Y, and so on. A whole lattice of "conclusion" vectors comes into existence in the upper right quadrant. This operation is taking the "closure" of the axioms with respect to the operation of summing.
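(If it helps to see the metaphor computed out, here is a minimal sketch of that closure operation; none of the code is from the discussion above, and the bounded window of "graph paper" is just my own device so the brute-force loop stops. Points near the window's edge can be misjudged, since a derivation might need to pass outside it, so keep the window bigger than the region you actually look at.)

    # Sketch: saturate a set of axiom vectors under summing, inside a bounded
    # window of the integer lattice. R and all names are arbitrary choices.
    from itertools import product

    R = 6  # half-width of the lattice window we search (an assumption)

    def closure(axioms, r=R):
        """Approximate closure of `axioms` under summing, restricted to the window."""
        cl = set(axioms)
        changed = True
        while changed:
            changed = False
            for a, b in product(list(cl), repeat=2):
                s = (a[0] + b[0], a[1] + b[1])
                if max(abs(s[0]), abs(s[1])) <= r and s not in cl:
                    cl.add(s)
                    changed = True
        return cl

    X, Y = (1, 0), (0, 1)
    base = closure({X, Y})
    print((3, 2) in base)    # True: X + X + X + Y + Y is a "conclusion"
    print((1, -1) in base)   # False: X - Y is not derivable by summing X's and Y's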
Some vectors, like X + Y, are not in the original axiom set, but they are in the closure, and so adding them to the axiom set doesn't change the closure. They're consistent, and moreover admissible.
Some vectors, like -(X + Y), are not in the original axiom set either, and if you add them to the axiom set, the closure suddenly expands to a particular, familiar, boring, "everything" pattern. These are called "inconsistent".
Notice that X + Y and -(X + Y) are opposites. Furthermore, opposite pairs often have this pattern: one is admissible, the other is inconsistent.
However, some vectors, like X - Y or Z (a new unit vector pointing out of the plane), when added to the axiom set, make the closure somewhat bigger (they're inadmissible) but don't make it expand to the familiar, boring "everything" pattern (they're consistent). Their opposites (Y - X or -Z) are likewise inadmissible and consistent.
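(Continuing the same sketch, reusing closure, X, and Y from above, here is one way to encode the three cases. As a stand-in for "the closure becomes everything" I just check whether the extended closure fills the whole window; that, and the labels, are heuristics of my own.)

    def classify(v, axioms=(X, Y), r=R):
        base = closure(set(axioms), r)
        if v in base:
            return "admissible"                # already a derivable conclusion
        extended = closure(set(axioms) | {v}, r)
        window = {(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)}
        if window.issubset(extended):
            return "inconsistent"              # closure blew up to "everything"
        return "independent"                   # consistent but inadmissible

    print(classify((1, 1)))     # X + Y      -> admissible
    print(classify((-1, -1)))   # -(X + Y)   -> inconsistent
    print(classify((1, -1)))    # X - Y      -> independent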
These vectors are analogous to "independent" statements. Generally, any time you expand the language (talking about Z when we were only ever talking about X and Y before), you will get independence.
If you are willing to expand your axiom set (and hence its closure), and you're trying to make sure a particular statement, one not already in the closure, ends up in the closure, you can always add that particular statement itself; then you have a one-line proof of that statement, because it's an axiom. Is there another way to prove it? Well, if you imagine taking out a piece of graph paper and coloring the vectors, there's a sort of "cone" of admissibles that goes up and to the right, and another "cone" of inconsistents that goes down and to the left, and there just aren't that many remaining spots with no color near the origin; X - Y and Y - X are the only two. You CAN break down a proof of 2X - 2Y into multiple steps using a new axiom, X - Y (since 2X - 2Y = (X - Y) + (X - Y)), but you CANNOT break down the one-step proof of X - Y in the same way. There isn't any room remaining.
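(In the sketch above, that last point looks like this:)

    target = (2, -2)                           # the vector 2X - 2Y
    print(target in closure({X, Y}))           # False: no proof from X and Y alone
    print(target in closure({X, Y, (1, -1)}))  # True: (X - Y) + (X - Y) is a two-step proof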
So in the more complicated logical situation, I imagine high-dimensional "cones" emerging from (subsets of) the ZFC axioms and their opposites, staining a vast number of short statements either admissible or inconsistent. Only when you go outward far enough, towards the high-dimensional "waist" between the cones, do you start getting to some unstained "independent" statements.
My guess is that some of the currently discussed independent statements are also pretty close to the origin, that is, they are among the shortest ones. Making a map of what this "waist" of ZFC looks like overall might yield results like "according to this metric of statement length / complexity, there is no shorter / simpler statement that could be added to ZFC to demonstrate (for example) Martin's axiom. That is, the shortest / simplest statement you can add to ZFC to get a system where Martin's axiom is true is Martin's axiom itself."
Of course, what I've done here is replace "intuitive" with "short" or "simple", which might not be what you want. Regardless of whether you think that's a legitimate move, it sounds like an interesting research project!