What makes the presence of Polynomials ubiquitous?

Polynomials are exactly what you get by adding and multiplying known and unknown quantities, so they necessarily show up in everything that involves basic algebra.
I’ll add another answer: they are good for approximating more complicated functions and are very easy to do computations with. Any reasonably smooth function, for example, can be approximated near a point by a linear polynomial if you “zoom in” closely enough. You get better approximations if you allow higher-degree terms, such as truncating the Taylor series of the function after the degree-2 terms. Once you do this, (partial) derivatives are much faster to compute. As someone else said, you can also do linear algebra using the monomials of degree less than k as your basis (or whatever other basis you might care about).
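As a small illustration of that zoom-in idea (the function names here are my own), truncating the Taylor series of sin about 0 gives polynomial approximations that improve with degree:

```python
import math

def sin_taylor(x, degree):
    """Taylor polynomial of sin about 0, truncated after `degree`."""
    total = 0.0
    n, sign = 1, 1.0
    while n <= degree:  # sin only has odd-power terms: x, x^3, x^5, ...
        total += sign * x**n / math.factorial(n)
        sign = -sign
        n += 2
    return total

x = 0.1
print(sin_taylor(x, 1))  # linear approximation: just x
print(sin_taylor(x, 3))  # x - x^3/6, much closer
print(math.sin(x))       # exact value for comparison
```

Near 0 the degree-1 truncation is already good, and the degree-3 one is better by several orders of magnitude.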
It's the other way round: They are the building blocks because they are simple.

In mathematics (or science in general, really), we tend to express things in terms of other, known things that are simple. This way we can build up quite complex knowledge in a Lego-like fashion. Another of these ubiquitous concepts in math is linear algebra. Vectors and matrices (or, more accurately, linear maps) are very easy to understand and use, yet they form the basis of a lot of advanced math, e.g. functional analysis, which can be described as linear algebra with functions instead of vectors and matrices.
You think polynomials are ubiquitous? Try addition! That’s everywhere! And multiplication too!

But polynomials are just what you get when you combine addition and multiplication… so they’re going to show up nearly everywhere addition and multiplication do.
To put slightly fancier language on what others have already said: polynomial rings are precisely the free algebras over a given ring, just as you form free vector spaces or free modules. We say that R[x] is the free object in the category of (commutative) R-algebras.
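Concretely (standard notation, stated here as a sketch): for any commutative R-algebra A, an R-algebra map out of R[x] is freely determined by where x goes.

```latex
% Universal property of R[x] as the free commutative R-algebra
% on one generator x:
\[
\forall\, a \in A \;\; \exists!\; \varphi \colon R[x] \to A
\quad\text{with}\quad \varphi(x) = a,
\qquad
\varphi\Bigl(\textstyle\sum_i r_i x^i\Bigr) = \sum_i r_i\, a^i .
\]
```

In other words, "substitute a for x" is the unique algebra homomorphism sending x to a, which is exactly the freeness claim.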
They're ubiquitous for two reasons, really.

First, they naturally describe the things we're interested in, like projectile motion, areas, and volumes. Second, we've found that they possess a boatload of nice properties: they approximate other functions well, they form a ring and a vector space, and so on.

So even if the question you're studying has nothing to do with polynomials, somebody has probably rephrased it into something about polynomials anyways.
If the members of a set of numbers are atoms, and addition and multiplication are covalent and ionic bonds, then polynomials are molecules and compounds. If you take some element x and add and multiply it by stuff over and over again, including adding and multiplying it with itself, you get polynomials. A polynomial is perhaps the simplest nontrivial object obtained by adding and multiplying. Given that simplicity, it is arguably very interesting and exciting that large classes of functions can be well approximated by these simple objects.
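A minimal sketch of this "build molecules out of x" picture, representing polynomials as coefficient lists (all names here are my own):

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x = [0, 1]    # the element x itself
three = [3]   # a constant
# (x + 3) * x  ->  x^2 + 3x, i.e. coefficients [0, 3, 1]
print(poly_mul(poly_add(x, three), x))
```

No matter how you keep combining x and constants with these two operations, the result stays a coefficient list: adding and multiplying never leaves the world of polynomials.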
This is maybe not related to your question, but I've been banging my head on this research problem since 2019. This week's attempt involves showing the inequality \int_I |f(x)| dx > c |I| \sup_{x \in I} |f(x)| for some constant c > 0, as f ranges over some family of functions and for all subintervals I = [a,b] of the unit interval (say).

If f ranges over the family P of polynomials of degree at most n, then the inequality holds. This is because it holds when I = [0,1] by equivalence of norms in finite dimensions, and it then holds for every subinterval I = [a,b] because the change of variable g(x) = f(a + (b-a)x) produces a g that is again in P.
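Spelling out the scaling step (my notation): substituting x = a + (b-a)t and writing g(t) = f(a + (b-a)t),

```latex
\[
\int_a^b |f(x)|\,dx
= (b-a)\int_0^1 |g(t)|\,dt
\;\ge\; (b-a)\, c \sup_{[0,1]} |g|
\;=\; c\,|I| \sup_I |f|,
\]
% where the middle inequality is the I = [0,1] case applied to g,
% which is again a polynomial of degree at most n.
```

The whole argument rests on affine substitutions preserving the (finite-dimensional) space P.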

Anyway, nevermind. :)
Surprised no one has mentioned that polynomials describe the height-versus-time graphs of falling objects and, more generally, of objects subject to constant acceleration (i.e. objects whose speed changes by the same amount every second).

This is because if the nth derivative of a function f is constant, then f is a degree-n polynomial. (And this is because the antiderivative of a degree-n polynomial is a degree-(n+1) polynomial, while the derivative of a degree-n polynomial is a degree-(n-1) polynomial.)
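A quick numerical check (the constants below are made up): sample a quadratic height function at equally spaced times, and the second differences come out constant, mirroring the constant second derivative:

```python
# h(t) = h0 + v0*t - 0.5*g*t^2 : constant acceleration -> degree-2 polynomial
h0, v0, g = 100.0, 5.0, 9.8

def h(t):
    return h0 + v0 * t - 0.5 * g * t**2

dt = 1.0
heights = [h(k * dt) for k in range(6)]
first_diffs = [b - a for a, b in zip(heights, heights[1:])]
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]
print(second_diffs)  # each entry is -9.8 (up to float rounding): h'' = -g
```

Taking differences once more would give all zeros, matching the statement that the derivative drops the degree by one.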
They are very, very easy to do repetitive, simple calculations with: it's basically just addition, subtraction, and multiplication. Combined with today's computers and Taylor expansions, this makes it relatively effortless to get really good approximations of basically any function you can think of.
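Horner's method makes that "just addition and multiplication" point concrete: evaluating a degree-n polynomial takes n multiplications and n additions. (The coefficients below are a truncated Taylor series for exp; the function names are mine.)

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    `coeffs` is highest degree first; each loop step is one multiply
    and one add.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Truncated Taylor series of exp about 0, degree 5 (highest degree first).
exp_coeffs = [1 / math.factorial(k) for k in range(5, -1, -1)]
print(horner(exp_coeffs, 0.5))  # close to math.exp(0.5)
print(math.exp(0.5))
```

This is essentially how library routines evaluate their polynomial approximations: cheap, branch-free arithmetic that hardware is very good at.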
