Think of a module as an abelian group with an associated ring action that works "nicely" (read: it satisfies the distributive laws). For example, every abelian group is naturally a Z-module, since it comes with a well-defined Z-action: for n ≥ 0 in Z, let ng be g added to itself n times, and let (-n)g = -(ng).
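To make the Z-action concrete, here's a minimal sketch (names like `z_action` are mine, and I'm using Z/6Z as a stand-in abelian group): it computes ng by repeated addition and checks the two distributive laws.

```python
# Sketch: the abelian group Z/6Z under addition mod 6, with the natural
# Z-action n.g = g added to itself n times (negated when n < 0).
def z_action(n, g, modulus=6):
    """Compute n.g in Z/(modulus)Z by repeated addition."""
    total = 0
    for _ in range(abs(n)):
        total = (total + g) % modulus
    return total if n >= 0 else (-total) % modulus

# The action distributes over both ring addition and group addition:
n, m, g, h = 2, 3, 4, 5
assert z_action(n + m, g) == (z_action(n, g) + z_action(m, g)) % 6
assert z_action(n, (g + h) % 6) == (z_action(n, g) + z_action(n, h)) % 6
```

Of course, any sensible implementation would just compute `(n * g) % modulus`; the loop is there to mirror the definition "g added to itself n times".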
It's better to think of vector spaces as upgraded modules, rather than modules as downgraded vector spaces. This is because vector spaces have a ton of structure that modules don't. For example, every vector space is "free" over the base field, whereas modules in general are not: the generators of a module need not form a basis the way they do for vector spaces. Modules can also have torsion (for example, Z/6Z is a Z-module in which every element is torsion, since 6g = 0 for all g).
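To see the torsion claim directly, here's a quick sketch computing the order of each element of Z/6Z as a Z-module, i.e. the smallest positive n with ng = 0 (the dictionary name `orders` is just mine):

```python
# Sketch: in the Z-module Z/6Z, every element is torsion, since 6.g = 0
# for every g. For each g, find the smallest positive n with n*g = 0 mod 6.
orders = {}
for g in range(6):
    n = next(n for n in range(1, 7) if (n * g) % 6 == 0)
    orders[g] = n
print(orders)  # every element has finite order, and each order divides 6
```

Nothing like this can happen in a vector space: if cv = 0 for a nonzero scalar c, you can multiply by 1/c to get v = 0, which is exactly the kind of move a general ring doesn't allow.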
For practical examples of why this may be useful, I'll broadly say that algebraic objects "acting" on other algebraic objects is in some sense the motivating principle behind all of abstract algebra. Groups, for example, were (in part) created to formalize the notion of symmetry, i.e. groups acting on sets and geometric objects. Similarly, "ring" actions are super useful in modern mathematics. For example, the continuous (or smooth) functions on a manifold M form a ring, and the smooth vector fields on M form a module over this ring: a function scales a vector field pointwise. More generally, the sections of any vector bundle over M form a module over the smooth functions, and gluing these along Euclidean patches gives a "sheaf" of modules on M; this construction extends naturally to all kinds of geometric objects.
Another nice example of a module is actually a vector space, though not in the way you might think. Let V be a finite dimensional vector space over a fixed field k, and let T be an endomorphism of V (think of T as a matrix). k\[x\] is the ring of polynomials in one variable with coefficients in k, and any polynomial f in k\[x\] can act on vectors of V by applying the fixed endomorphism. For example, x\^2 + 1 acts on a vector v by sending v to T\^2(v) + v, and x\^3 + 2x\^2 acts on v by sending v to T\^3(v) + 2T\^2(v). This action is linear, so V has a k\[x\]-module structure. This is useful because k\[x\] is a PID and V is finitely generated over k\[x\], so the structure theorem lets us decompose V as a k\[x\]-module. The invariant factor form of this decomposition yields the rational canonical form of T (and the largest invariant factor is the minimal polynomial of T), while the elementary divisor form yields the Jordan canonical form of T when k is algebraically closed (or more generally, when the minimal polynomial splits over k).
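The k\[x\]-action above can be sketched numerically. This is a toy illustration, not a structure-theorem computation: `poly_action` is my name, k is approximated by floats, and T is a 90-degree rotation of the plane, whose minimal polynomial is x\^2 + 1.

```python
import numpy as np

# Sketch: V = k^2, T a fixed 2x2 matrix. A polynomial f = a_0 + a_1 x + ...
# acts on v by f.v = a_0 v + a_1 T(v) + a_2 T^2(v) + ...
def poly_action(coeffs, T, v):
    """Apply f(T) to v, where coeffs = [a_0, a_1, ...]."""
    result = np.zeros_like(v, dtype=float)
    power = np.array(v, dtype=float)  # starts at T^0(v) = v
    for a in coeffs:
        result += a * power
        power = T @ power  # advance to the next power of T applied to v
    return result

T = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees
v = np.array([1.0, 0.0])
# (x^2 + 1).v = T^2(v) + v; here T^2 = -I, so the result is the zero vector.
# In other words x^2 + 1 annihilates all of V: it is the minimal polynomial of T.
assert np.allclose(poly_action([1.0, 0.0, 1.0], T, v), [0.0, 0.0])
```

The fact that the minimal polynomial kills every vector is exactly what makes V a torsion k\[x\]-module, which is why the PID structure theorem has something to say about it.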