Posted by: cnpde | December 1, 2014

254A, Notes 1: Elementary multiplicative number theory

What's new

In analytic number theory, an arithmetic function is simply a function $f: \mathbb{N} \rightarrow \mathbb{C}$ from the natural numbers $\mathbb{N} = \{1,2,3,\dots\}$ to the real or complex numbers. (One occasionally also considers arithmetic functions taking values in more general rings than $\mathbb{R}$ or $\mathbb{C}$, as in this previous blog post, but we will restrict attention here to the classical situation of real or complex arithmetic functions.) Experience has shown that a particularly tractable and relevant class of arithmetic functions for analytic number theory are the multiplicative functions, which are arithmetic functions $f: \mathbb{N} \rightarrow \mathbb{C}$ with the additional property that

$\displaystyle f(nm) = f(n) f(m) \qquad (1)$

whenever $n, m \in \mathbb{N}$ are coprime. (One also considers arithmetic functions, such as the logarithm function $L(n) := \log n$ or the von Mangoldt function, that…

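The multiplicative property (1) is easy to experiment with. As a small illustration (my own example, not part of the excerpt above), Euler's totient function \varphi(n), the number of 1\le k\le n coprime to n, is a classical multiplicative function, and (1) can be checked by brute force in Python:

```python
from math import gcd

def phi(n):
    """Euler's totient: how many 1 <= k <= n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Multiplicativity: phi(n*m) == phi(n)*phi(m) whenever gcd(n, m) == 1.
for n in range(1, 20):
    for m in range(1, 20):
        if gcd(n, m) == 1:
            assert phi(n * m) == phi(n) * phi(m)

# Coprimality matters: phi(4) = 2, while phi(2) * phi(2) = 1.
assert phi(4) == 2 and phi(2) * phi(2) == 1
```

The function phi here is completely unoptimized; it is only meant to make the definition concrete.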

Posted by: cnpde | March 21, 2013

What is good math? (I)

This is a rather big topic for me, and perhaps for most people, even for many professional mathematicians. Still, I would like to write a few words here, as a record of my present understanding.

I began thinking about this question a long time ago, when I made up my mind to become a mathematician. To answer it, we need to know what math is. So what is mathematics? Most people begin to learn math in kindergarten, and the process never pauses, even through university. I did not know what math is until I finished my undergraduate study. Based on an undergraduate education, we can broadly define math as the subject that deals with quantity, structure, space and change.

Each of these arose as human beings began to learn about the world they live in. Math comes from concrete objects, but is not limited to particular objects. Mathematicians always try to include more objects in one mathematical concept by abstracting general properties from concrete things. Then, by studying the mathematical concepts, we can obtain general properties that apply to many concrete things. From this point of view, one kind of good math is to construct a new theory or concept that includes several old ones. This taste is similar to that of physics, and it is ambitious. Hundreds of physicists, including famous ones like Einstein, have spent more than a hundred years looking for a unified theory to explain our world, but in vain. There always seems to be a balance here: paying attention to generality may sacrifice special character. Take PDE as an example. After Hörmander, who studied linear PDE in a systematic way, no one has found a general theory for nonlinear PDE, because different equations have different properties, and the method for one equation usually does not work for another. As a result, most mathematicians in this field nowadays focus on one type of equation for their whole career. It seems that in the field of PDE today, the top work is to introduce new tools to solve problems.

One view is:

1st class is introducing new concept and building new theory.

2nd class is expressing new ideas and making new tools.

Judged from this viewpoint, PDE is far from good math.

Another kind of good math is work that connects one branch with another. One example is the Gauss-Bonnet theorem in differential geometry. It is good because it connects geometry and topology, providing a bridge for understanding two different branches as a whole. Another field sharing this character may be geometric analysis, which uses tools from analysis (mainly PDE) to study geometry.

To be continued……

Posted by: cnpde | September 21, 2012


Recently I have begun to study dynamical systems. My motivation comes from PDE, but let me set that aside for a while. Here I just want to say a little about my impressions of dynamical systems (DS) and PDE.

After these days of learning, I have a strong feeling that PDE is a mess (I do not want to be rude here, so I did not use the word ‘shit’)! I say so because I have found that dynamical systems is such a beautiful subject. It can describe an evolution system at a level of detail that PDE can never reach. In dynamical systems, the properties we care about include recurrence and stability. A phenomenon that happens just once and is never repeated is not the center of research. Judging from the history of DS, we can understand this easily: dynamical systems were originally developed to understand the motion of the planets. Most planets have periodic orbits; they ‘visit’ the same positions with a definite frequency. That is why people care about phenomena that repeat.

After people realized that the Earth moves around the Sun, a worry spread among some educated people: will the Earth fall into the Sun? The stability theory of dynamical systems grew out of this kind of question. In the theory of hyperbolic dynamical systems, the tangent bundle splits into two or three parts: a stable part, an unstable part, and (sometimes) a central part. A diffeomorphism preserving such a stable/unstable splitting everywhere is called an Anosov diffeomorphism, and Anosov systems are structurally stable, which is helpful for classifying dynamical systems. More detailed analysis reveals that many dynamical phenomena depend only on topological properties, not on the differentiable structure, so much energy has been put into the study of topological dynamical systems. Mathematicians have constructed a great mansion that describes these phenomena in a very detailed way.
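The stable/unstable splitting is easy to see in the simplest standard example, the Arnold cat map on the 2-torus, x \mapsto Ax \pmod 1 (an illustration of mine, not discussed in the post). The eigenvectors of A span the unstable and stable directions:

```python
import numpy as np

# The Arnold cat map x -> A x (mod 1) is the standard example of an
# Anosov diffeomorphism of the 2-torus.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)

# One eigenvalue > 1 (unstable direction), one in (0, 1) (stable direction).
lam_u, lam_s = max(eigvals), min(eigvals)
assert lam_u > 1 and 0 < lam_s < 1
assert abs(lam_u * lam_s - 1.0) < 1e-12   # area-preserving: det A = 1

# A vector along the stable eigendirection contracts under one iteration,
# while generic vectors are stretched by roughly lam_u per step.
v_s = eigvecs[:, np.argmin(eigvals)]
assert np.linalg.norm(A @ v_s) < np.linalg.norm(v_s)
```

Here the eigenvalues are (3 ± √5)/2, so every iterate stretches the unstable direction and contracts the stable one by reciprocal factors.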

However, when we come back to PDE, the situation is really disappointing. In PDE, at this moment, people are still struggling with well-posedness. This is good, because mathematicians are rightly careful with objects that may not exist. But beyond that, what we know about a system is rather limited. We may get some asymptotic properties, and also local well-posedness, but what about the middle part? We can see this clearly through the following example. One of the Millennium Prize problems is the global well-posedness of the Navier-Stokes equations, which describe the motion of fluids. Currently we have some local well-posedness results, and also some asymptotic estimates, but we know almost nothing about the middle part. (This may be an overstatement, because some dynamical results, such as bifurcations, have been obtained, but they are still very limited.) If we think in this way, we may find that turbulence is exactly something that happens in this middle time period, yet we cannot describe it with the Navier-Stokes equations (here I do not consider whether this model is good or not). So PDE theory is far from perfect, especially for nonlinear PDE. It has even been called ‘dirty math’, because we have no general methods for dealing with nonlinear PDEs.

So much for the frustrating side. The discussion above also brings good news: there are still many open problems in PDE, and we need to introduce new tools and new ideas into PDE research to help us understand the solutions better!

Posted by: cnpde | September 18, 2012

Basics of Riemannian Geometry 3


I will devote this article to Cartan’s package.

Assume u is a vector field on \mathcal{U}\subset M, and denote the flow of u by \varphi_t. Suppose \varphi_t: \mathcal{U} \rightarrow \mathcal{W} is a diffeomorphism; its inverse is \varphi_{-t}, with push-forward \varphi_{-t*}: T\mathcal{W}\rightarrow T\mathcal{U}.

Definition 1: For a vector field u with flow \varphi_t, the Lie derivative \mathfrak{L}_u: T_s^r\rightarrow T_s^r of a tensor \Phi is defined as (\mathfrak{L}_u \Phi)(x)=\lim\limits_{t\rightarrow 0}\frac{1}{t}[\varphi_{-t*}\Phi(x)-\Phi(x)].

Definition 2: For a vector u, define the interior product of a tensor \Phi with u as i_u: T_r(V)\rightarrow T_{r-1}(V): \Phi\mapsto i_u\Phi, satisfying

i_u\Phi(u_1,..., u_{r-1})=\Phi(u, u_1,...,u_{r-1}).

Cartan’s package is

[d,d]=0, \quad [i_X,i_Y]=0

[d,\mathfrak{L}_X]=0,\quad [\mathfrak{L}_X,i_Y]=i_{[X,Y]}

[\mathfrak{L}_X,\mathfrak{L}_Y]=\mathfrak{L}_{[X,Y]}, \quad [d,i_X]=\mathfrak{L}_X.


1) Assume \Phi\in T_s^r. Then for one-forms w_1,...,w_r and vector fields u_1,...,u_s, the Lie derivative satisfies the Leibniz rule

u(\Phi(w_1,...,w_r,u_1,...,u_s))=(\mathfrak{L}_u\Phi)(w_1,...,w_r,u_1,...,u_s)

+\sum\limits_{i=1}^r \Phi(w_1,...,\mathfrak{L}_uw_i,...,w_r,u_1,...,u_s)+\sum\limits_{j=1}^s \Phi(w_1,...,w_r,u_1,...,\mathfrak{L}_uu_j,...,u_s).

Proof: The left-hand side is

\lim\limits_{t\rightarrow 0}\frac1t[\Phi(\varphi_t(x))(w_1(\varphi_t(x)),...,w_r(\varphi_t(x)),u_1(\varphi_t(x)),...,u_s(\varphi_t(x)))-\Phi(x)(w_1(x),...,w_r(x),u_1(x),...,u_s(x))].

Adding and subtracting \Phi(\varphi_t(x))(\varphi^*_tw_1(\varphi_t(x)),...,\varphi_t^*w_r(\varphi_t(x)),\varphi_{t*}u_1(x),...,\varphi_{t*}u_s(x)) inside the bracket splits the limit into two parts. The first part compares each argument with its transport along the flow and yields

\sum\limits_{i=1}^r \Phi(w_1,...,\mathfrak{L}_uw_i,...,w_r,u_1,...,u_s)+\sum\limits_{j=1}^s \Phi(w_1,...,w_r,u_1,...,\mathfrak{L}_uu_j,...,u_s),

while the second part is, by Definition 1, exactly (\mathfrak{L}_u\Phi)(w_1,...,w_r,u_1,...,u_s).

2) [\mathfrak{L}_u,\mathfrak{L}_v]=\mathfrak{L}_{[u,v]}

Proof by induction on the degree of the form: for a function f, [\mathfrak{L}_u,\mathfrak{L}_v]f=u(vf)-v(uf)=[u,v]f=\mathfrak{L}_{[u,v]}f, which is the base case.

Suppose the identity holds for \Phi; then we can prove it also holds for df\wedge \Phi.

3) [d,d]=0.

Proof: [d,d]\Phi=2d^2\Phi=0.


4) [d,L_u]=0.

Proof: Since the exterior derivative commutes with the push-forward \varphi_{-t*}, we have dL_u\Phi=d\lim\limits_{t\rightarrow 0}\frac{\varphi_{-t*}\Phi-\Phi}{t}=\lim\limits_{t\rightarrow 0}\frac{\varphi_{-t*}d\Phi-d\Phi}{t}=L_ud\Phi.

5) [i_u,i_v]=0.

Proof: i_ui_v\Phi(w_3,...,w_s)+i_vi_u\Phi(w_3,...,w_s)

=\Phi(v,u,w_3,...,w_s)+\Phi(u,v,w_3,...,w_s)=0,

due to skew-symmetry.

6) [L_u,i_v]=i_{[u,v]}.

Proof: [L_u,i_v]\Phi=L_ui_v\Phi-i_vL_u\Phi

=\lim\limits_{t\rightarrow 0}\frac{\Phi(\varphi_t(x))(v(\varphi_t(x)),u_2(x),...,u_r(x))-\Phi(x)(v(x),u_2(x),...,u_r(x))}{t}

-\lim\limits_{t\rightarrow 0}\frac{\Phi(\varphi_t(x))(v(x),u_2(x),...,u_r(x))-\Phi(x)(v(x),u_2(x),...,u_r(x))}{t}

=\lim\limits_{t\rightarrow 0}\frac{\Phi(\varphi_t(x))(v(\varphi_t(x)),u_2(x),...,u_r(x))-\Phi(\varphi_t(x))(v(x),u_2(x),...,u_r(x))}{t}

=\Phi(L_uv,u_2,...,u_r)=i_{[u,v]}\Phi(u_2,...,u_r).

7) [d,i_u]=L_u.

Proof by induction: for a function f, [d,i_u]f=i_udf=uf=L_uf.

Suppose it holds for \Phi; then prove it for df\wedge \Phi.


Looking back, we may see why the right-hand sides contain so many terms: the manifold is not flat, i.e. the frame changes from point to point. So when we take a derivative, we must differentiate not only the components of the tensor field, which are functions on the manifold, but also the frame, which also depends on the point. In the end, we get several terms when we compute the derivative.
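These identities can also be checked concretely in coordinates. The following Python/sympy sketch (my own; the component names P, Q, a, b are ad hoc) verifies the magic formula [d,i_X]=\mathfrak{L}_X for a general 1-form \omega=P\,dx+Q\,dy and vector field X=a\,\partial_x+b\,\partial_y on \mathbb{R}^2:

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q, a, b = [sp.Function(n)(x, y) for n in ('P', 'Q', 'a', 'b')]

# X = a d/dx + b d/dy,  omega = P dx + Q dy.
# i_X omega = a P + b Q (a function), d omega = (Q_x - P_y) dx ^ dy,
# so i_X d omega = (Q_x - P_y) (a dy - b dx).
f = a * P + b * Q
curl = sp.diff(Q, x) - sp.diff(P, y)

# Components of d(i_X omega) + i_X(d omega):
rhs_dx = sp.diff(f, x) - b * curl
rhs_dy = sp.diff(f, y) + a * curl

# Direct coordinate formula (L_X omega)_i = X^j d_j omega_i + omega_j d_i X^j:
lie_dx = a * sp.diff(P, x) + b * sp.diff(P, y) + P * sp.diff(a, x) + Q * sp.diff(b, x)
lie_dy = a * sp.diff(Q, x) + b * sp.diff(Q, y) + P * sp.diff(a, y) + Q * sp.diff(b, y)

assert sp.simplify(rhs_dx - lie_dx) == 0
assert sp.simplify(rhs_dy - lie_dy) == 0
```

The check works for arbitrary smooth P, Q, a, b, so it really is the identity of operators on 1-forms over \mathbb{R}^2, not a single numerical instance.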

Posted by: cnpde | September 16, 2012

Fourier series 1

Fourier analysis is a powerful tool for PDE problems on \mathbb{R}^n or \mathbb{T}^n. On \mathbb{R}^n, we usually use the Fourier transform

\hat f(\xi)=\int_{\mathbb{R}^n} e^{-ix\xi}f(x)dx;

while in the case of \mathbb{T}^n, we calculate Fourier coefficients \hat f(k).

These two cases are quite similar, especially in their algebraic properties. However, the difference between L^p and \ell^p, and other differences between \mathbb{R}^n and \mathbb{T}^n, show that they are really two different things.

In this set of posts, I will discuss Fourier series. The main topics will be:

1) Decay of the Fourier coefficients \hat f(k) and smoothness of the function f(x).

2)  Pointwise convergence.

3) Bochner-Riesz summation.

4) Lacunary series.

The Fourier transform mainly measures oscillation. We can view the functions \cos and \sin as signals: the higher the frequency, the faster the oscillation. By calculating \hat f(k), we ‘take out’ the wave with frequency k. The larger |\hat f(k)| is, the larger the amplitude, and the more weight this frequency carries in the function f.
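Numerically, this ‘taking out’ is just an inner product against e^{-ikx}. A quick check with numpy (my own toy signal, not from the post): for f(x)=2\cos 3x+\frac12\sin 7x, the coefficients are large exactly at k=\pm 3, \pm 7:

```python
import numpy as np

N = 1024
x = 2 * np.pi * np.arange(N) / N
f = 2.0 * np.cos(3 * x) + 0.5 * np.sin(7 * x)

# hat f(k) = (1/2pi) * int_0^{2pi} f(x) e^{-ikx} dx, by a Riemann sum
# (exact for trigonometric polynomials on an equispaced grid, up to rounding).
def fourier_coeff(vals, x, k):
    return np.sum(vals * np.exp(-1j * k * x)) / len(x)

# 2 cos(3x) contributes 1 at k = +/-3; 0.5 sin(7x) contributes -/+ 0.25i at k = +/-7.
assert abs(fourier_coeff(f, x, 3) - 1.0) < 1e-10
assert abs(fourier_coeff(f, x, 7) - (-0.25j)) < 1e-10
assert abs(fourier_coeff(f, x, 5)) < 1e-10   # no wave at frequency 5
```

So computing \hat f(k) really does isolate the amplitude of the frequency-k wave hidden in f.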

By the Riemann-Lebesgue lemma, we know that for an L^1 function f, |\hat f(m)| approaches 0 as |m|\rightarrow \infty, and the rate of decay can be arbitrarily slow.

For smooth functions, we have the decay estimate

|\hat f(m)|\leq C_{s,n}\frac{\sup\limits_{|\alpha|=s}\|\partial^\alpha f\|_{L^\infty}}{|m|^s}.

Conversely, if the Fourier coefficients decay as

|\hat f(m)|\leq C(1+|m|)^{-s-n} for all m\in \mathbb{Z}^n, then f has partial derivatives of every order |\alpha|\leq [[s]], where [[s]] is the largest integer strictly less than s.

For a function of bounded variation, |\hat f(m)|\leq \frac{Var(f)}{2\pi |m|}.
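These decay rates are easy to observe numerically. In the sketch below (my own illustration; the tolerance constants are ad hoc), a step function of bounded variation shows the predicted 1/m decay at odd frequencies, while a smooth periodic function decays far faster:

```python
import numpy as np

N = 1 << 14
x = 2 * np.pi * np.arange(N) / N

# hat f(k) = (1/2pi) * int_0^{2pi} f(x) e^{-ikx} dx, by Riemann sum.
def coeff(vals, x, k):
    return np.sum(vals * np.exp(-1j * k * x)) / len(x)

step = np.where(x < np.pi, 1.0, 0.0)   # bounded variation, discontinuous
smooth = np.exp(np.cos(x))             # smooth and 2pi-periodic

ks = [5, 21, 85]                       # odd k, where the step coefficients are nonzero
step_decay = [abs(coeff(step, x, k)) for k in ks]
smooth_decay = [abs(coeff(smooth, x, k)) for k in ks]

# Step function: |hat f(k)| = 1/(pi*k) for odd k, so k * |hat f(k)| is constant.
for k, c in zip(ks, step_decay):
    assert abs(k * c - 1 / np.pi) < 1e-3

# Smooth function: already far below the 1/k rate at moderate frequencies.
assert smooth_decay[1] < 1e-12
assert smooth_decay[2] < step_decay[2]
```

The contrast between the two lists is exactly the smoothness-decay correspondence in topic 1) above.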

Posted by: cnpde | September 12, 2012

Basics of Riemannian Geometry 2

In this post, I will discuss the most brilliant part of differential geometry: exterior differential calculus. Before that, I will make some preparations.

Following the last post of this series, we already have the tangent and cotangent spaces at a point p, denoted T_p and T_p^*. Collecting these spaces over all points p of the manifold, we get bundles. We know how to define tensors on a finite-dimensional vector space; now we define tensors on the tangent and cotangent spaces. Define the (r,s)-type tensor space of the manifold M at p as follows:

T_s^r(p)=T_p\otimes...\otimes T_p\otimes T_p^* \otimes...\otimes T_p^* (with r factors of T_p and s factors of T_p^*).

After collecting the tensor spaces over all points of M, we get the tensor bundle T_s^r. The natural projection \pi from T_s^r to M is the bundle projection, while T_s^r(p) is the fibre of the bundle T_s^r at the point p.

Assume f: M\rightarrow T_s^r is a smooth mapping. If \pi\circ f=id: M\rightarrow M, then we say f is a smooth section of the tensor bundle T_s^r, or an (r,s)-type smooth tensor field. The definition extends easily to exterior differential forms by anti-symmetrizing.

We can define exterior vector bundles and exterior form bundles as

\Lambda^r(M)=\bigcup\limits_{p\in M}\Lambda^r(T_p)

\Lambda^r(M^*)=\bigcup\limits_{p\in M}\Lambda^r(T_p^*).

So an exterior differential form of degree r on M is a smooth map M\rightarrow \Lambda^r(M^*), i.e. a section of the exterior form bundle. Another view is to regard it as a map T(M)\times ...\times T(M)\rightarrow C^\infty(M) which is multi-linear and alternating.

Posted by: cnpde | September 10, 2012

Basics of Riemannian Geometry 1

A manifold is an important geometric object in modern mathematics. It generalizes Euclidean space: a manifold is defined as a Hausdorff space locally homeomorphic to an open subset of Euclidean space \mathbb{R}^m. On each such subset we can define a chart, so that locally the manifold looks exactly like Euclidean space.

After defining the manifold, we can define smooth functions on each subset of the manifold. However, the smooth functions defined near a point p, under addition and multiplication by real numbers, do not form a linear space, because the null element is not uniquely defined. So we pass to equivalence classes, called C^\infty-germs of the manifold at p. All the germs form a vector space, denoted \mathcal{F}_p.

By defining a parametrized curve \gamma, we can define a linear functional on \mathcal{F}_p, <<\gamma, [f]>>= \frac{d(f\circ \gamma)}{dt} |_{t=0}. Using this notation, we can define a subspace of \mathcal{F}_p by

\mathcal{H}_p=\{[f]\in \mathcal{F}_p | <<\gamma,[f]>>=0, \forall \gamma\in \Gamma_p\} ,

where \Gamma_p is the set of all parametrized curves through p.

The quotient space \mathcal{F}_p/\mathcal{H}_p is the cotangent space of the manifold at p, denoted T_p^*. Its elements are called cotangent vectors, denoted (df)_p.

Define an equivalence relation on \Gamma_p by

\gamma\sim\gamma' \iff <<\gamma,[f]>>=<<\gamma',[f]>>, \forall [f]\in\mathcal{F}_p;

then each equivalence class [\gamma] is a linear functional on the cotangent space T^*_p. All the equivalence classes comprise the tangent space T_p.

An element X of the tangent space T_p can be viewed as a linear functional on C_p^\infty by setting Xf=<X,(df)_p>, which we call the directional derivative. From this we get another definition of the tangent space T_p: it is the set of all linear operators on C_p^\infty satisfying the Leibniz rule X(fg)=f(p)\cdot Xg+g(p)\cdot Xf.


It is useful to express tangent and cotangent vectors as linear combinations of the natural basis:

X=\sum\limits_{i=1}^m\xi^i\frac{\partial}{\partial u^i},

a=\sum \limits_{i=1}^ma_idu^i,

where \xi^i=\frac{d(u^i\circ\gamma)}{dt}\Big|_{t=0}, a_i=\frac{\partial f}{\partial u^i}.
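These formulas can be checked symbolically. The following sympy sketch (my own example; the curve and function are chosen arbitrarily) verifies that Xf computed from the basis expansion \sum_i \xi^i \frac{\partial f}{\partial u^i} agrees with the direct derivative \frac{d(f\circ\gamma)}{dt}|_{t=0}:

```python
import sympy as sp

t, u1, u2 = sp.symbols('t u1 u2')

# A sample curve gamma(t) = (cos t, t**2 + t) through p = gamma(0) = (1, 0),
# and a sample smooth function f(u1, u2).
gamma = (sp.cos(t), t**2 + t)
f = u1**2 * sp.exp(u2)

# Components xi^i = d(u^i o gamma)/dt at t = 0:
xi = [sp.diff(c, t).subs(t, 0) for c in gamma]

# Xf via the basis expansion X = sum_i xi^i d/du^i, evaluated at p = (1, 0):
Xf_basis = sum(x_i * sp.diff(f, u).subs({u1: 1, u2: 0})
               for x_i, u in zip(xi, (u1, u2)))

# Xf directly as d(f o gamma)/dt at t = 0:
f_along = f.subs({u1: gamma[0], u2: gamma[1]})
Xf_curve = sp.diff(f_along, t).subs(t, 0)

assert sp.simplify(Xf_basis - Xf_curve) == 0
```

Both computations give the same number, illustrating that the tangent vector [\gamma] and its basis expansion define the same directional derivative.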

Posted by: cnpde | September 3, 2012

Introduction of this blog

I am starting this new blog to publish some essays on mathematics. Given my research interests, the content will mostly concern analysis of PDE, Fourier analysis, fluid mechanics and symplectic geometry.


Welcome and enjoy!