adamant's blog

By adamant, 20 months ago, In English

Hi everyone!

This blog is inspired by some contestant solutions to 104234F - Palindromic Polynomial. In short, the problem asks you to find a polynomial such that $$$A(x_i) = y_i$$$ for several given pairs $$$(x_i, y_i)$$$, and such that the coefficients of $$$A(x)$$$ read the same left-to-right and right-to-left (i.e., they form a palindrome). The intended solution utilizes the fact that for such a polynomial, it always holds that

$$$ A(x) = x^n A(x^{-1}), $$$

as $$$x^n A(x^{-1})$$$ is exactly the polynomial $$$A(x)$$$ with reversed coefficients (assuming the degree is $$$n$$$).
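As a quick sanity check, here is a short Python sketch of this identity; the polynomial $$$1 + 2x + 3x^2 + 2x^3 + x^4$$$ is an arbitrary palindromic example, not taken from the problem.

```python
# Sanity check of A(x) = x^n * A(1/x) for a palindromic polynomial.
coeffs = [1, 2, 3, 2, 1]             # low-to-high; reads the same both ways
assert coeffs == coeffs[::-1]        # palindromic

def A(x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

n = len(coeffs) - 1                  # degree
for x in (2.0, 3.5, -1.25):
    assert abs(A(x) - x ** n * A(1 / x)) < 1e-9
```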

Representing palindromic polynomials with polynomials in $$$x+\frac{1}{x}$$$

Several solutions from contestants, however, relied on a different criterion for $$$A(x)$$$. Rather than using the identity above, participants found out that palindromic polynomials can always be represented as

$$$ A(x) = x^{d} B\left(x+\frac{1}{x}\right), $$$

when its degree $$$2d$$$ is even, or as

$$$ A(x) = x^{d} (1+x) B\left(x+\frac{1}{x}\right), $$$

when its degree $$$2d+1$$$ is odd. In both cases, $$$B(x)$$$ is a polynomial of degree $$$d$$$. This representation allows for a much simpler solution, but in this blog we will talk about why it exists in the first place, rather than how to use it to solve the problem.

Thanks to ecnerwala for explaining this approach to me in more detail!

Getting rid of odd degrees

First things first, we should notice that a palindromic polynomial $$$A(x)$$$ of odd degree always has the root $$$x=-1$$$: $$$A(-1)$$$ is the alternating sum of the polynomial's coefficients, and each coefficient from the right half enters this sum with the sign opposite to that of its mirror coefficient in the left half, so the sum vanishes. Then, dividing $$$A(x)$$$ by $$$1+x$$$, we get a palindromic polynomial of degree $$$2d$$$ instead of $$$2d+1$$$.

This allows us to focus on the polynomials of even degree.
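This step is easy to carry out explicitly with synthetic division at the root $$$x = -1$$$; a minimal Python sketch (the input polynomial is an arbitrary palindromic example):

```python
def divide_by_1_plus_x(c):
    """Divide sum(c[i] * x^i) by (1 + x) via synthetic division at root -1.
    c is low-to-high; returns (quotient low-to-high, remainder)."""
    vals = []
    carry = 0
    for a in reversed(c):          # process coefficients high to low
        carry = a - carry          # b_k = c_k + (-1) * b_{k+1}
        vals.append(carry)
    rem = vals.pop()               # the last value is A(-1), the remainder
    return vals[::-1], rem

A = [1, 2, 2, 1]                   # palindromic, odd degree 3
q, r = divide_by_1_plus_x(A)
assert r == 0                      # x = -1 is indeed a root
assert q == q[::-1]                # the quotient is palindromic, even degree
```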

Representing polynomials of even degree with $$$x^k + \frac{1}{x^k}$$$

First of all we may notice that palindromic polynomials of even degree $$$2d$$$ can always be represented as

$$$ A(x) = \sum\limits_{i=0}^d a_i (x^{d-i} + x^{d+i}) = x^d \sum\limits_{i=0}^d a_i \left(x^i + \frac{1}{x^i}\right) $$$

for some sequence $$$a_0, a_1, \dots, a_d$$$ which is easily derived from the coefficients of $$$A(x)$$$. Now we should show that the sum

$$$ \sum\limits_{i=0}^d a_i \left(x^i + \frac{1}{x^i}\right) $$$

can be rewritten as

$$$ B\left(x+\frac{1}{x}\right) = \sum\limits_{i=0}^d b_i \left(x+\frac{1}{x}\right)^i $$$

with some sequence $$$b_0, b_1, \dots, b_d$$$.
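Extracting the sequence $$$a_0, \dots, a_d$$$ from the coefficients is straightforward; the only subtlety is that the $$$i = 0$$$ term contributes $$$x^d$$$ twice, so $$$a_0$$$ is half the middle coefficient. A small Python sketch (the example polynomial is arbitrary):

```python
from fractions import Fraction

def halves(c):
    """For palindromic c (low-to-high, even degree 2d), recover a_0..a_d with
    A(x) = sum over i of a_i * (x^(d-i) + x^(d+i)); a_0 = c_d / 2 because
    the i = 0 term contributes x^d twice."""
    d = (len(c) - 1) // 2
    return [Fraction(c[d], 2)] + [Fraction(c[d + i]) for i in range(1, d + 1)]

c = [1, 2, 3, 2, 1]                      # palindromic, degree 4 (d = 2)
a = halves(c)
# reconstruct A from the a_i and compare with the original coefficients
d = (len(c) - 1) // 2
rec = [Fraction(0)] * len(c)
for i, ai in enumerate(a):
    rec[d - i] += ai
    rec[d + i] += ai
assert rec == [Fraction(v) for v in c]
```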

Changing basis from $$$x^k + \frac{1}{x^k}$$$ to $$$\left(x+\frac{1}{x} \right)^k$$$

Consider the sequence of Laurent polynomials (i.e. polynomials in which negative degrees are allowed)

$$$ A_k = \left(x+\frac{1}{x}\right)^k, $$$

and another sequence of Laurent polynomials

$$$ B_k = x^k + \frac{1}{x^k}. $$$

As it turns out, these two sequences have the same linear span. In practical terms, this means that every element of $$$B_k$$$ may be represented as a finite linear combination of the elements of $$$A_k$$$, and vice versa. It is somewhat trivial in one direction:

$$$ A_k = \sum\limits_{i=0}^k \binom{k}{i} x^{k-2i} = \sum\limits_{i=0}^{\lfloor k/2 \rfloor} \binom{k}{i} \left(x^{k-2i}+\frac{1}{x^{k-2i}}\right). $$$

With some special consideration for even $$$k$$$, as the middle coefficient should additionally be divided by $$$2$$$ (the constant term pairs with itself). You can represent this transition with a lower triangular matrix with $$$1$$$ on the diagonal (except for the first row, where it's $$$\frac{1}{2}$$$), which means that the matrix is invertible and there is, indeed, a representation of $$$B_k$$$ in terms of $$$A_k$$$ as well.
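The change-of-basis matrix is easy to compute explicitly from the binomial expansion above; here is a Python sketch that builds it and verifies that it is lower triangular with an invertible diagonal:

```python
from fractions import Fraction
from math import comb

def basis_matrix(n):
    """M[k][j] = coefficient of B_j = x^j + x^(-j) in A_k = (x + 1/x)^k,
    for 0 <= k, j <= n (note that B_0 = 2)."""
    M = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for k in range(n + 1):
        for i in range(k + 1):
            j = abs(k - 2 * i)
            # the term C(k, i) * x^(k-2i); each of x^j and x^(-j)
            # carries half of B_j's weight
            M[k][j] += Fraction(comb(k, i), 2)
    return M

M = basis_matrix(5)
# lower triangular, diagonal of 1s except M[0][0] = 1/2 => invertible
for k in range(6):
    assert all(M[k][j] == 0 for j in range(k + 1, 6))
assert M[0][0] == Fraction(1, 2)
assert all(M[k][k] == 1 for k in range(1, 6))
# numeric sanity check at x = 3
x = 3.0
for k in range(6):
    lhs = (x + 1 / x) ** k
    rhs = sum(float(M[k][j]) * (x ** j + x ** (-j)) for j in range(6))
    assert abs(lhs - rhs) < 1e-9
```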

But what about cosines?

We will need a bit of complex number magic here. The explanation above, while technically correct, doesn't really shed any light on why it is true at all. To understand it better, consider the substitution $$$x=e^{i \theta}$$$; then one can use the explicit formula for $$$\cos \theta$$$:

$$$ \cos \theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, $$$

and notice that

$$$\begin{gather} A_k = 2^k \cos^k \theta, \\ B_k = 2 \cos k \theta. \end{gather}$$$

Then, it is a well-known fact that $$$\cos k \theta$$$ can be represented as a polynomial in $$$1, \cos \theta, \cos^2 \theta, \dots, \cos^k \theta$$$. To prove it, consider the identity

$$$ e^{i\theta} = \cos \theta + i \sin \theta, $$$

then on one hand

$$$ e^{ik \theta} = \cos k \theta + i \sin k \theta, $$$

on the other hand

$$$ e^{i k \theta} = (e^{i \theta})^k = (\cos \theta + i \sin \theta)^k = \sum\limits_{j=0}^k \binom{k}{j} i^j \sin^j \theta \cos^{k-j} \theta. $$$

Now, we're only interested in the real part of this expression, hence in the terms with even $$$j$$$. Using $$$\sin^2 \theta = 1 - \cos^2 \theta$$$, we get

$$$ \cos k \theta = \sum\limits_{j=0}^{\lfloor k / 2 \rfloor} \binom{k}{2j} (\cos^2 \theta - 1)^j \cos^{k-2j} \theta. $$$

Hence, if we define

$$$ T_k(x) = \sum\limits_{j=0}^{\lfloor k / 2 \rfloor} \binom{k}{2j} (x^2 - 1)^j x^{k-2j}, $$$

we can see that $$$T_k (\cos x) = \cos kx$$$. The sequence of polynomials $$$T_0, T_1, T_2, \dots$$$ defined in this way is also known as the Chebyshev polynomials. In the context of our problem, it means that

$$$ T_k\left(\frac{x+x^{-1}}{2}\right) = \frac{x^k + x^{-k}}{2}, $$$

meaning that the desired transition could be expressed as

$$$ \sum\limits_{i=0}^d a_i \left(x^i + \frac{1}{x^i}\right) = 2 \sum\limits_{i=0}^d a_i T_i\left( \frac{x+x^{-1}}{2} \right) = B\left(x+\frac{1}{x}\right). $$$
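To put it all together, a short Python sketch that checks both $$$T_k(\cos t) = \cos kt$$$ and the transition above; the sequence $$$a_0, \dots, a_d$$$ is an arbitrary example:

```python
import math

def T(k, x):
    """Chebyshev polynomial T_k at x, via the explicit formula above."""
    return sum(math.comb(k, 2 * j) * (x * x - 1) ** j * x ** (k - 2 * j)
               for j in range(k // 2 + 1))

# T_k(cos t) = cos(k t)
for k in range(8):
    for t in (0.1, 0.5, 1.2):
        assert abs(T(k, math.cos(t)) - math.cos(k * t)) < 1e-9

# sum a_i (x^i + x^-i)  ==  2 * sum a_i T_i((x + 1/x) / 2)
a = [1.5, 2.0, 1.0]                  # arbitrary a_0..a_d
for x in (2.0, 0.7, -1.3):
    lhs = sum(ai * (x ** i + x ** (-i)) for i, ai in enumerate(a))
    rhs = 2 * sum(ai * T(i, (x + 1 / x) / 2) for i, ai in enumerate(a))
    assert abs(lhs - rhs) < 1e-9
```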

»
20 months ago, # |

Thanks! From this blog I learned that the plural of cosine is cosines.

  • »
    »
    20 months ago, # ^ |

    It's nice to know that the most important part of the blog did not go unnoticed ^.^'

»
20 months ago, # |

"The explanation above, while technically correct, doesn't really shed any light on why it is true at all. " — Idk, for me the trivial induction sheds much more light on this than any cosines will ever do :P

  • »
    »
    20 months ago, # ^ |

I kind of agree, I think the trivial induction sheds more light on Chebyshev polynomials than the other way around.

    • »
      »
      »
      20 months ago, # ^ |

I guess what I mean there is that cosines showcase how this result is connected to some kind of grander scheme of things, rather than just being an isolated fact on its own.

»
20 months ago, # |

Btw I think that one of the coolest (and best known) applications of Chebyshev polynomials is the following problem:

What is the largest leading coefficient of a polynomial $$$P$$$ of degree $$$n$$$ such that $$$|P(x)| \le 1$$$ for each $$$|x| \le 1$$$?

  • »
    »
    20 months ago, # ^ |
    Solution?
    • »
      »
      »
      20 months ago, # ^ |

      I'll just give the full solution here.

As $$$T_k(\cos x) = \cos(kx)$$$, we get that Chebyshev polynomials alternate between touching $$$y=1$$$ and $$$y=-1$$$ on the interval $$$[-1, 1]$$$ as many times as their degree allows them to.

We will prove that Chebyshev polynomials yield optimal answers to our question. Let us assume that there is a polynomial $$$P$$$ of degree $$$n$$$ satisfying the problem's assumptions with a bigger leading coefficient than $$$T_n$$$. Let $$$r>1$$$ be the ratio of the leading coefficient of $$$P$$$ to the leading coefficient of $$$T_n$$$. Then consider the polynomial $$$\frac{P(x)}{r}$$$, whose leading coefficient is the same as that of $$$T_n$$$. Its image of $$$[-1, 1]$$$ is contained within $$$(-1, 1)$$$ (note it's an open interval), while $$$T_n$$$ attains the values $$$\pm 1$$$ with alternating signs $$$n+1$$$ times on $$$[-1, 1]$$$, so the two graphs intersect at least $$$n$$$ times. Hence $$$T_n(x) - \frac{P(x)}{r}$$$ has at least $$$n$$$ zeros, but it has degree smaller than $$$n$$$, hence it must be the zero polynomial, a contradiction.

      Btw if we change the requirements from $$$P([-1, 1]) \subseteq [-1, 1]$$$ to $$$P([-2, 2]) \subseteq [-2, 2]$$$, then the answer is a clean $$$1$$$ :)
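For completeness, the answer to the problem above is the leading coefficient of $$$T_n$$$, which is $$$2^{n-1}$$$ for $$$n \ge 1$$$; a Python sketch verifying both this and the bound $$$|T_n(x)| \le 1$$$ on $$$[-1, 1]$$$:

```python
def cheb_coeffs(n):
    """Coefficients (low-to-high) of T_n via T_k = 2x T_{k-1} - T_{k-2}."""
    p_prev, p_cur = [1], [0, 1]              # T_0 = 1, T_1 = x
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        nxt = [0] + [2 * c for c in p_cur]   # 2x * T_{k-1}
        for i, c in enumerate(p_prev):
            nxt[i] -= c                      # ... minus T_{k-2}
        p_prev, p_cur = p_cur, nxt
    return p_cur

for n in range(1, 7):
    c = cheb_coeffs(n)
    assert c[-1] == 2 ** (n - 1)             # leading coefficient is 2^{n-1}
    for j in range(101):                     # |T_n| <= 1 on [-1, 1]
        x = -1 + 0.02 * j
        assert abs(sum(ci * x ** i for i, ci in enumerate(c))) <= 1 + 1e-9
```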

»
20 months ago, # |

There's a much cleaner way (IMO) to get to the conclusion that $$$x^k + x^{-k}$$$ can be written as a polynomial in $$$x + x^{-1}$$$, via the idea of a norm in quadratic extensions (Pell's equation, anyone?).

Let's consider $$$x$$$ such that $$$x + x^{-1}$$$ is an even positive integer greater than $$$2$$$, say $$$2t$$$ for $$$t > 1$$$. Then $$$x = t \pm \sqrt{t^2 - 1}$$$; let's choose the positive sign. Consider the quadratic extension $$$\mathbb{Z}[\sqrt{t^2 - 1}]$$$ of the ring of integers, where the norm is $$$N(a + b \sqrt{t^2 - 1}) = a^2 - b^2(t^2 - 1)$$$. (This is a genuine quadratic extension since $$$t > 1$$$, as $$$t^2 - 1$$$ is then not a perfect square.) Clearly, $$$x$$$ lies in this extension, and $$$N(x) = t^2 - (t^2 - 1) = 1$$$. Remember that the norm is multiplicative. Since this is a ring, $$$x^k$$$ lies in it too, and its norm is also $$$1$$$, so if $$$x^k = a + b \sqrt{t^2 - 1}$$$, then $$$a^2 - b^2(t^2 - 1) = 1$$$. That is, $$$x^k = a + \sqrt{a^2 - 1}$$$ (taking $$$b \ge 0$$$), and hence $$$x^{-k} = a - \sqrt{a^2 - 1}$$$, so $$$x^k + x^{-k} = 2a$$$. When expanding $$$(t + \sqrt{t^2 - 1})^k$$$ via the binomial theorem, we note that $$$a$$$ is a polynomial in $$$t$$$. Hence there is a fixed polynomial $$$P$$$ such that $$$P(x + x^{-1}) = x^k + x^{-k}$$$ holds for infinitely many integers, which means that it is an identity of polynomials, as two polynomials of bounded degree agreeing at infinitely many points must coincide.

Note that if you actually expand out the whole thing using binomial theorem, you get the same expression for $$$T_k$$$ as you claimed.

Essentially, this is almost isomorphic to your argument, but has the added benefit that you don't have to deal with imaginary powers of $$$e$$$. Indeed, if you set $$$t = \cos \vartheta$$$, then tracing this argument for $$$|t| \le 1$$$, we recover the main parts of your proof.

  • »
    »
    20 months ago, # ^ |

    Do we really need Chebyshev polynomials, or quadratic extension to prove such an intuitive thing?

    Calm down guys!!

    Spoiler
    • »
      »
      »
      20 months ago, # ^ |

      Ooorrr $$$(x+x^{-1})^k = x^k + x^{-k} + \sum_{i=0}^{k-1}a_i (x^i + x^{-i})$$$ for some sequence $$$a_0, \ldots, a_{k-1}$$$ and we're done by induction :P

    • »
      »
      »
      20 months ago, # ^ |

      As adamant pointed out in the comment above, it's not so much about simplicity as it is about being connected to the bigger picture. You could also get the same conclusion from $$$f_k = x^k + x^{-k} \implies f_k = (x + x^{-1}) f_{k-1} - f_{k-2}$$$.

Now, using this recurrence, one can use tridiagonal matrices to show more properties, but I won't bother you with those.
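The recurrence above is also a convenient way to actually compute the polynomial $$$P_k$$$ with $$$P_k(x + x^{-1}) = x^k + x^{-k}$$$; a Python sketch:

```python
def pow_sum_poly(k):
    """Coefficients (low-to-high) of the polynomial P_k(s) satisfying
    P_k(x + 1/x) = x^k + x^(-k), via P_k = s * P_{k-1} - P_{k-2}."""
    p_prev, p_cur = [2], [0, 1]          # P_0 = 2, P_1 = s
    if k == 0:
        return p_prev
    for _ in range(k - 1):
        nxt = [0] + p_cur                # multiply P_{k-1} by s
        for i, c in enumerate(p_prev):
            nxt[i] -= c                  # ... minus P_{k-2}
        p_prev, p_cur = p_cur, nxt
    return p_cur

x = 2.5
for k in range(6):
    s = x + 1 / x
    val = sum(c * s ** i for i, c in enumerate(pow_sum_poly(k)))
    assert abs(val - (x ** k + x ** (-k))) < 1e-9
```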

      • »
        »
        »
        »
        20 months ago, # ^ |

        Oh, I like this one! So, as I understand, from this it follows that

        $$$ x^k+x^{-k} = 2 \det\begin{bmatrix} \frac{x+x^{-1}}{2} & 1 & 0 & \dots & 0 \\ 1 & x+x^{-1} & 1 & \dots & 0 \\ 0 & 1 & x+x^{-1} & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & \dots & 1 & x + x^{-1} \end{bmatrix}, $$$

        where the matrix has size $$$k \times k$$$. Neat, I like it when things are expressed directly through such determinants, as you may guess from 104234D - Triterminant!

        From this also follows that the Chebyshev polynomials can be expressed as

        $$$ T_k(x) = \det\begin{bmatrix} x & 1 & 0 & \dots & 0 \\ 1 & 2x & 1 & \dots & 0 \\ 0 & 1 & 2x & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & \dots & 1 & 2x \end{bmatrix}, $$$

        as for them the recurrence is $$$T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$$$.
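Expanding the $$$k \times k$$$ determinant above along its last row gives exactly the recurrence $$$D_k = 2x D_{k-1} - D_{k-2}$$$ with $$$D_1 = x$$$, so the identity $$$T_k(x) = D_k$$$ is easy to check numerically; a Python sketch:

```python
import math

def T_det(k, x):
    """Determinant of the k x k tridiagonal matrix above, computed via
    the last-row expansion D_k = 2x * D_{k-1} - D_{k-2}."""
    if k == 0:
        return 1.0                 # empty determinant
    d_prev, d_cur = 1.0, x         # D_0 = 1, D_1 = x
    for _ in range(k - 1):
        d_prev, d_cur = d_cur, 2 * x * d_cur - d_prev
    return d_cur

# agrees with T_k, since T_k(cos t) = cos(k t)
for k in range(1, 8):
    t = 0.8
    assert abs(T_det(k, math.cos(t)) - math.cos(k * t)) < 1e-9
```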