Why is it useful to show the existence and uniqueness of a solution for a PDE?

by D1X   Last Updated January 18, 2018 13:20 PM

Don't get me wrong, I understand that it is important in mathematics to qualitatively study the problems given. But I would like to know to what extent this helps, for example, to actually solve the problem.

I am reading books that deal with the variational approach for elliptic PDEs such as those involving the Laplacian. Apart from transforming the problem into one of minimizing a functional, the main goal is to show the existence and uniqueness of a solution for the given PDE.

The only thing I can think of is that, for example, Laplace's equation can be solved using separation of variables. If you manage to show that the candidate obtained in separated variables is indeed a solution, you have found the solution.



7 Answers


In general it is impossible to construct an explicit solution. But the fact that you can't find a solution by standard techniques like separation of variables or Fourier analysis does not mean that a solution does not exist.

A theorem stating that your equation has one and only one solution is of fundamental importance, because it says that your problem can be solved uniquely.

Once you know that your solution exists, you can go for numerical methods to approximate it.

Anyway, mathematicians often study the qualitative properties of solutions even before they have proved that such solutions exist. Indeed, a priori estimates may actually help us find a solution.
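To illustrate (this standard example is my addition, not part of the answer above): for the Dirichlet problem $-\Delta u = f$ in $\Omega$, $u = 0$ on $\partial\Omega$, multiplying by $u$ and integrating by parts gives

$$ \int_\Omega |\nabla u|^2 \, dx = \int_\Omega f u \, dx \leq \lVert f \rVert_{L^2} \lVert u \rVert_{L^2} \leq C \lVert f \rVert_{L^2} \lVert \nabla u \rVert_{L^2} $$

by the Cauchy-Schwarz and Poincaré inequalities, hence $\lVert \nabla u \rVert_{L^2} \leq C \lVert f \rVert_{L^2}$. This bound holds for any would-be solution before we know one exists, and this kind of coercivity is precisely what drives existence proofs such as Lax-Milgram.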

Finally, existence is much more important than uniqueness, and quite often nonlinear equations possess more than one solution.

Siminore
July 12, 2016 18:28 PM

The following will not fully answer your question, but it may give you one idea (of possibly many) about which direction to look in:

In most cases, searching for an explicit solution (as you can sometimes do for the Laplacian) is in vain. PDEs from applications very often do involve (interestingly enough) the Laplacian, but more often they are (linear, semilinear or nonlinear) perturbations of it. Nice examples of this arise, e.g., in the theory of minimal surfaces.

It turns out that existence and uniqueness theorems for (simple, linear) model equations are often the cornerstone for proving existence results for more complicated equations, such as nonlinear ones. You can often view a nonlinear elliptic PDE as a differentiable map between Banach manifolds, the derivative of which is a linear PDE. Existence and uniqueness for linear PDEs is then the key to applying theorems like the implicit (or inverse) function theorem to the nonlinear ones: recall that the implicit function theorem asks for the derivative to be an isomorphism, and for a linear PDE, unique solvability is exactly what gives you that it is onto and one-to-one. More generally, one can apply theorems like the Sard-Smale theorem when existence and uniqueness are not known but the Fredholm alternative is known to hold.
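The numerical shadow of this linearization argument is Newton's method, where each step solves the linearized (linear) PDE. Here is a minimal sketch (my addition; the 1-D model problem $-u'' + u^3 = f$, the grid size, and the right-hand side are arbitrary choices for illustration):

```python
import numpy as np

# Newton's method for the two-point boundary value problem
#   -u'' + u^3 = f on (0,1), u(0) = u(1) = 0,
# discretized by finite differences. Each Newton step solves the
# *linearized* equation -v'' + 3u^2 v = residual, so unique solvability
# of that linear problem is exactly what makes the step well defined.

n = 99                       # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)        # arbitrary smooth right-hand side

# Finite-difference matrix for -u'' with homogeneous Dirichlet conditions.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)              # initial guess
for _ in range(20):
    residual = A @ u + u**3 - f
    if np.linalg.norm(residual) < 1e-12:
        break
    J = A + np.diag(3.0 * u**2)   # derivative of the nonlinear map at u
    u -= np.linalg.solve(J, residual)

print(np.linalg.norm(A @ u + u**3 - f))  # ~0: discrete equation solved
```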

Thomas
July 12, 2016 18:29 PM

  1. Some existence proofs proceed by just writing down the solution. This isn't true of many, but when it is, I think their usefulness is clear. The routine examples where separation of variables works are options here.
  2. Other existence proofs are literally an approximation recipe, even though they don't produce an explicit formula. The classic example is Picard-Lindelöf for ODEs: Picard iteration is one way to construct a solution to an ODE (see the first sketch after this list). It's not great, simply because the integrals it asks for are in general expensive to compute, but it can be done.
  3. Other existence proofs suggest approximation techniques. For example, Lax-Milgram tells us that there is exactly one solution to an elliptic PDE (given certain boundary conditions) and that it satisfies an infinite collection of equations drawn from the test function space. In addition, elliptic regularity tells us how nice we should expect the solution to be. This hints at the idea of the finite element method: restrict attention to a $d$-dimensional subspace of the solution space which has the amount of regularity we should expect, and solve the system of equations given by the weak form of the equation with $d$ carefully chosen linearly independent test functions (see the second sketch after this list). Proofs of existence of a solution to this finite-dimensional system, and even convergence proofs, use some of the same techniques as the actual proof of Lax-Milgram itself.
  4. Uniqueness is nice to have, but non-uniqueness can lead to interesting mathematical questions as well as interesting questions in the domain of application. Basically these boil down to: how does the system that we are actually studying (the real system, not the model) "select" a solution when there are all these solutions out there? This can reveal new problems of interest. The example that comes to mind from my field is the invariant measure problem. It can happen that a deterministic dynamical system has many invariant measures, but that the one which "really matters" is the one which is obtained by adding a small noise, selecting the unique invariant measure of this perturbed system, and then sending the noise intensity to zero. This idea of stochastic perturbation is telling us something about the underlying deterministic system (since the answer is usually independent of the precise form of the noise that we chose).
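First sketch (my addition; the model problem $u' = u$, $u(0) = 1$, the grid, and the trapezoidal quadrature are arbitrary choices for illustration): Picard iteration carried out numerically.

```python
import numpy as np

# Picard iteration for u' = f(t, u), u(0) = u0: iterate the integral map
#   u_{k+1}(t) = u0 + \int_0^t f(s, u_k(s)) ds,
# here approximated with the trapezoidal rule on a fixed grid.
# Model problem: u' = u, u(0) = 1, whose exact solution is e^t.

f = lambda t, u: u
t = np.linspace(0.0, 1.0, 201)
u = np.ones_like(t)                 # u_0: the constant initial guess

for _ in range(30):                 # each pass is one Picard step
    g = f(t, u)
    # cumulative trapezoidal integral of g from 0 to each grid point
    steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)
    u = 1.0 + np.concatenate(([0.0], np.cumsum(steps)))

print(abs(u[-1] - np.e))            # small: iterates approach e^1
```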
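Second sketch (again my addition, with an arbitrary 1-D model problem): the finite element idea from item 3, solving the $d$ weak-form equations for $-u'' = f$ on $(0,1)$ with piecewise-linear hat functions.

```python
import numpy as np

# Galerkin finite elements for -u'' = f on (0,1), u(0) = u(1) = 0:
# seek u_h in the span of d hat functions phi_1, ..., phi_d and impose
#   a(u_h, phi_j) = (f, phi_j)  for j = 1, ..., d,
# i.e. a d x d linear system. For hats, a(phi_i, phi_j) is tridiagonal.

d = 50
h = 1.0 / (d + 1)
x = np.linspace(h, 1 - h, d)                 # interior nodes
f = lambda s: np.pi**2 * np.sin(np.pi * s)   # exact solution: sin(pi s)

# Stiffness matrix K_ij = \int phi_i' phi_j' dx for hat functions.
K = (np.diag(2.0 * np.ones(d))
     - np.diag(np.ones(d - 1), 1)
     - np.diag(np.ones(d - 1), -1)) / h
b = f(x) * h                                 # load (f, phi_j), lumped

u = np.linalg.solve(K, b)                    # the d weak-form equations
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```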

Other than that, I don't have much to say beyond "it's nice to know that when we run our numerics we're looking for something that actually exists".

Ian
July 12, 2016 18:39 PM

There's an example I've stolen from here that might be of some use.

Let's say I have the equation $\sqrt{2x-1} = -x$ ($x \in \mathbb{R}$). We can square both sides and get the equation $2x-1=x^2$, which has a single, unique solution, $x=1$. But this single, unique solution was only a candidate for a solution; it does not satisfy the equation as originally posed.
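A quick numerical check of this (my addition):

```python
import math

x = 1.0                      # the unique root of the squared equation 2x - 1 = x^2
print(math.sqrt(2 * x - 1))  # 1.0
print(-x)                    # -1.0: so x = 1 fails the original equation
```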

When dealing with PDEs, there are many, many things to consider. The first one is normally the equation itself (whether it involves the Laplacian or any other well-known operator), and the second is probably the boundary conditions.

A lot of emphasis is placed on boundary conditions, and indeed the usual three have names: the Dirichlet, Neumann, and Robin boundary conditions.

Differential equations can be posed in a myriad of different ways. Add in the fact that they can be in multiple dimensions, and there's a world of problems that can come about: does a solution to my PDE even exist? I mean, I can set one up very easily:

$-\Delta y + y^3 = B u \text{ in } \Omega$ - A semi-linear PDE

$ y = 0 \text{ on } \partial \Omega$ - Boundary conditions for the problem

This is a hard PDE that I just happened to have in front of me. Is this an arbitrary PDE, or does it have a solution we can meaningfully solve for? We want to prove that there is a solution, and that if we find one it will be unique, before going through all the trouble of actually solving for it. Solving these things can get really hairy really fast. What if the equation had $y^2$ instead of $y^3$?

(Yes, the semi-linear PDE I posted does have a solution. I forget the implications for $y$-terms other than $y^3$, and my notes say that Browder-Minty is the theorem that proves it, but I've forgotten the details and am hence going back over my notes.)

Bourque
July 12, 2016 18:46 PM

PDEs often come up from the modelling of a system. Existence of a solution means that the modelling assumptions are consistent. Uniqueness means that they are sufficient to determine the system's behaviour.

Answering these questions has a pivotal role in assessing the legitimacy of a model.

filipos
July 12, 2016 19:55 PM

PDEs are far from my area of expertise, but maybe it will nevertheless be useful if I mention an aspect of existence and uniqueness theorems that I consider particularly important and useful. That aspect is not the conclusion of the theorem ("there exists a unique function such that $\dots$") but rather the hypotheses. These describe the sort of differential equation under consideration and the initial or boundary conditions that the solution is supposed to satisfy. The theorem is essentially saying that those initial or boundary conditions are the right ones for that particular sort of equation. (For instance, prescribing $u$ on all of $\partial\Omega$ is right for Laplace's equation, whereas a heat equation instead wants initial data in time together with boundary data.) In other words, they tell you what conditions, in addition to the differential equation, can safely be imposed (i.e., a solution still exists when you impose them) and suffice to pin down the solution completely (uniqueness). Intuitively, they tell us how much flexibility is available in the solutions of the differential equation.

Andreas Blass
July 12, 2016 23:25 PM

To supplement the existing answers, I want to mention a useful application of elliptic regularity for separation of variables in particular.

Let $L$ be a second order elliptic linear operator and consider the PDE,

$$ Lu =f\ \text{ in }\ \Omega, \quad u = 0\ \text{ on }\ \partial\Omega. $$

Here $\Omega$ is a sufficiently nice space parametrised by coordinates $r,x,y$ (for simplicity $\dim \Omega = 3$). Assuming the relevant eigenfunctions are explicitly known, we may try to solve this by separating variables. Doing so, we obtain a solution of the form

$$ u(r,x,y) = \sum_{i,j=1}^{\infty} a_{ij}(r)\phi_i(x)\psi_j(y), $$

where $\phi_i,\psi_j$ form an $L^2$-orthonormal basis of eigenfunctions for $L$, satisfying the boundary conditions.

Now, assuming the coefficients $a_{ij}$ decay sufficiently quickly, one can show that $u \in H^1(\Omega)$, with weak derivatives obtained via term-by-term differentiation. Since we can supposedly determine everything explicitly, we can expect this to be quite easy.

However, it's not clear whether this series actually converges to a smooth function, or whether term-by-term differentiation is valid. Even in simple cases where $\phi_i,\psi_j$ are trigonometric and we get a Fourier series, we need very strong decay assumptions to get uniform convergence of $u$ and its derivatives (in general this need not hold).
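To see the issue in the simplest one-dimensional model (this illustration is my addition): for $-u'' = f$ on $(0,\pi)$ with $u(0) = u(\pi) = 0$, expanding $f = \sum_n f_n \sin(nx)$ gives

$$ u(x) = \sum_{n=1}^{\infty} \frac{f_n}{n^2} \sin(nx), $$

so the coefficients of $u$ gain two powers of decay over those of $f$, but each term-by-term derivative spends one of them back: $u''$ has coefficients $-f_n$, and the standard sufficient condition for its uniform convergence is $\sum_n |f_n| < \infty$, which is strictly stronger than $f \in L^2$.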

This is where elliptic theory is useful, namely the estimates. Under appropriate assumptions, one can prove an estimate of the form,

$$ \lVert u \rVert_{C^2(\bar\Omega)} \leq C\left( \lVert f \rVert_{H^m(\Omega)} + \lVert u \rVert_{L^2(\Omega)}\right). $$

This is super useful, because now we know our solution is actually regular and we can differentiate it term-by-term. Of course this doesn't give us everything, since this doesn't prove the series converges to $u$ in a nice way; just that this $L^2$ limit is $C^2.$ If we want to prove further convergence/approximation results, some more work is necessary.

ktoi
December 23, 2017 10:23 AM
