maths with Python 3: Diffusion Equation

It has been a long time since my last Python tutorial, and I have got a new computer since then, so... first things first: installing Python again.

Ok, first problem. My tutorial for Python, SciPy and Matplotlib is no longer valid. There is some problem between those versions and Windows 8 and they don’t install properly. So I guess I need to write a new one.

I found that a good way of having Python and all the main scientific packages is to go for Anaconda.

Anaconda is a Python distribution which includes SciPy and many more packages. It is compatible with Windows, Mac and Linux.

Step 1. Go to the downloads web page and look for the version for your system. In my case, with Windows 8 and 64 bits, this is the one I chose:

Windows 64-bit / 289M / md5: 80bc3fe5f8d2f83110eee775946ed3b8

Step 2. Just run the file you downloaded. The installation is automatic. And voilà!

Step 3. Hello World. Once installed, when you run the “Launcher” this is what you see:

[Screenshot of the Anaconda Launcher]

And for now, just launch Spyder; the other features are not of interest for this tutorial. It will look something like this:

Wow (I hadn’t seen it before), it’s quite similar to Matlab, which I know very well. To start with it, type in the console:

print('Hello world')

Welcome to Spyder!

Now for the maths. Do you remember my previous post about how to solve the Rössler equation system? Check it again, because we are going to do something similar.

Today we are going to focus on the diffusion equation.

Suppose we have some kind of medium (like water, for example) and we deposit a drop of ink in it. The concentration of the ink at the point (x,y,z) at the instant t will be p(x,y,z,t). If we assume that the ink diffuses by the random movement of the molecules (water molecules and ink particles), and that there is no preferred direction (for example due to convection currents), then the evolution of the concentration of ink will be given by the diffusion equation:

\frac{\partial p}{\partial t}=D\left(\frac{\partial^2 p}{\partial x^2}+\frac{\partial^2 p}{\partial y^2}+\frac{\partial^2 p}{\partial z^2} \right)

The constant D determines how fast the diffusion takes place.

This equation is also called the heat equation, because it also describes how heat diffuses in a solid body.

We are going to show how to use Python to solve it numerically using the finite difference method.

First we need to discretize the domain in which we want to solve the equation. So, instead of finding p(x,y,z,t) for every value of x, y, z and t, we are going to seek the solution only at the points:

\left\{ \begin{matrix} x_i=i\delta x \\ y_j=j \delta y \\  z_k=k\delta z \\ t_m=m\delta t \end{matrix}\right.

i, j, k, m = 0...N

The factors

\delta x, \delta y, \delta z, \delta t

determine how big the separation is between the points where we are going to approximate the solution. Obviously, the smaller these factors, the closer the points and the more accurate the solution (and the more time it takes to solve it).
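Just to make this concrete (this snippet is my own addition, with made-up values for the step sizes and for N), here is one way the grid points could be built with NumPy:

import numpy as np

# Made-up step sizes and number of points, just to show how the grid could be built
delta_x = delta_y = delta_z = 1.0
delta_t = 0.002
N = 10

# The grid points x_i = i*delta_x (and similarly for y, z and t)
x = np.arange(N + 1) * delta_x
y = np.arange(N + 1) * delta_y
z = np.arange(N + 1) * delta_z
t = np.arange(N + 1) * delta_t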

Ok, so now that our domain is divided into a grid, let’s try to write the diffusion equation using only points of the grid. According to the definition of the derivative, we have

\frac{\mathrm{d} p}{\mathrm{d}x} =\lim_{\Delta x\rightarrow 0} { \frac{ \Delta p}{\Delta x} } =\lim_{\Delta x\rightarrow 0}{ \frac{p(x_{i+1})-p(x_{i})}{ x_{i+1}-x_i}}

To make it more readable we can use

p_i=p(x_i)

If now instead of the real derivative we want the numerical approximation, we can take any of these:

Forward Difference:

\left. \frac{\mathrm{d} p}{\mathrm{d} x} \right|_{x_i} \approx \frac{p_{i+1}-p_i}{x_{i+1}-x_i}=\frac{p_{i+1}-p_i}{\delta x}

Backward Difference:

\left. \frac{\mathrm{d} p}{\mathrm{d} x} \right|_{x_i}\approx \frac{p_{i}-p_{i-1}}{x_{i}-x_{i-1}}=\frac{p_{i}-p_{i-1}}{\delta x}

Central Difference:

\left. \frac{\mathrm{d} p}{\mathrm{d} x} \right|_{x_i} \approx \frac{p_{i+1}-p_{i-1}}{x_{i+1}-x_{i-1}}=\frac{p_{i+1}-p_{i-1}}{2\delta x}

There is no complication in going from total to partial derivatives in any of these definitions, so let’s use partial derivatives from now on.
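As a quick sanity check (again my own addition, not part of the derivation), we can compare the three approximations on a function whose derivative we know exactly, for example p(x) = sin(x), whose derivative is cos(x):

import numpy as np

# Hypothetical test function and a grid spacing chosen just for this check
p = np.sin
dx = 0.1
xi = 1.0

forward  = (p(xi + dx) - p(xi)) / dx
backward = (p(xi) - p(xi - dx)) / dx
central  = (p(xi + dx) - p(xi - dx)) / (2 * dx)

# The central difference is typically the closest to the exact derivative cos(x_i)
print(forward, backward, central, np.cos(xi))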

For the next step we are going to use a trick. Let’s take a point between two points of our grid

x_{i+\frac{1}{2}}

And write the central differences for two of those points:

\left. \frac{\partial p}{\partial x} \right|_{x_{i+\frac{1}{2}}} \approx \frac{p_{i+1}-p_{i}}{x_{i+1}-x_{i}}=\frac{p_{i+1}-p_{i}}{\delta x}

\left. \frac{\partial p}{\partial x} \right|_{x_{i-\frac{1}{2}}} \approx \frac{p_{i}-p_{i-1}}{x_{i}-x_{i-1}}=\frac{p_{i}-p_{i-1}}{\delta x}

Nice trick. Take your time to think about it and the grid.

Why do this trick? Why not? We are free to choose the grid we want. Doing it this way is just a means of visualizing what is happening; alternatively, we could have started from the beginning with more points and nothing would change.

Using the trick, we can now write the second partial derivative as

\frac{\partial^2 p}{\partial x^2} =\lim_{\Delta x\rightarrow 0} { \frac{ \Delta \left(\frac{\partial p}{\partial x} \right)}{\Delta x} }

And that will lead to

\left. \frac{\partial^2 p}{\partial x^2} \right|_{x_{i}} \approx \frac{\left. \frac{\partial p}{\partial x} \right|_{x_{i+\frac{1}{2}}}-\left. \frac{\partial p}{\partial x} \right|_{x_{i-\frac{1}{2}}}}{x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}}=\frac{p_{i+1}-2p_{i}+p_{i-1}}{(\delta x)^2}
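Once more as an illustration of my own, we can test this three-point stencil on a function with a known second derivative, say p(x) = x^3, whose second derivative is 6x:

# Quick numerical check of the stencil (p_{i+1} - 2 p_i + p_{i-1}) / (dx)^2
def p(x):
    return x ** 3   # hypothetical test function, its second derivative is 6*x

dx = 0.01
xi = 2.0
second_derivative = (p(xi + dx) - 2 * p(xi) + p(xi - dx)) / dx ** 2
print(second_derivative)   # should be very close to 6 * 2.0 = 12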

Now we can put all these results together with the correct subindices. They look like…

\left\{ \begin{matrix} \left. \frac{\partial^2 p}{\partial x^2} \right|_{x_{i},y_{j},z_{k},t_{m}} \approx \frac{p_{i+1,j,k,m}-2p_{i,j,k,m}+p_{i-1,j,k,m}}{(\delta x)^2}    \\  \left. \frac{\partial^2 p}{\partial y^2} \right|_{x_{i},y_{j},z_{k},t_{m}} \approx \frac{p_{i,j+1,k,m}-2p_{i,j,k,m}+p_{i,j-1,k,m}}{(\delta y)^2}    \\    \left. \frac{\partial^2 p}{\partial z^2} \right|_{x_{i},y_{j},z_{k},t_{m}} \approx \frac{p_{i,j,k+1,m}-2p_{i,j,k,m}+p_{i,j,k-1,m}}{(\delta z)^2}    \\    \left. \frac{\partial p}{\partial t} \right|_{x_{i},y_{j},z_{k},t_{m}} \approx \frac{p_{i,j,k,m+1}-p_{i,j,k,m}}{\delta t}    \end{matrix}\right.

OMG, they look so complex! Not at all. If we introduce these expressions into the diffusion equation and write the future value with respect to the previous one…

\begin{matrix} \frac{p_{i,j,k,m+1}-p_{i,j,k,m}}{\delta t}= D \left( \frac{p_{i+1,j,k,m}-2p_{i,j,k,m}+p_{i-1,j,k,m}}{(\delta x)^2}+ \right. \\ \left.\frac{p_{i,j+1,k,m}-2p_{i,j,k,m}+p_{i,j-1,k,m}}{(\delta y)^2} +\frac{p_{i,j,k+1,m}-2p_{i,j,k,m}+p_{i,j,k-1,m}}{(\delta z)^2} \right) \end{matrix}

And voilà! If we have the solution for all the points x, y, z at one instant of time, we can find the solution recursively for all the other instants!!!
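To see the recursion in action before the full 2D program, here is a minimal 1D sketch (the constant D, the step sizes and the initial condition are made-up values, and the two boundary points are simply left untouched):

import numpy as np

D, dt, dx = 1.0, 0.002, 1.0   # made-up parameters
nx, nt = 100, 500

p = np.zeros([nx, nt])
p[nx // 2, 0] = 1.0           # a single "drop of ink" in the middle as initial condition

for m in range(1, nt):
    # future value of each interior point from the previous time step
    p[1:nx-1, m] = p[1:nx-1, m-1] + dt * D * (
        p[2:nx, m-1] - 2 * p[1:nx-1, m-1] + p[0:nx-2, m-1]) / dx ** 2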

Are we ready now for the Python code? No, not yet. Just two more things. We know that an initial condition is needed in order to solve the equation recursively, but what about boundary conditions?

There are mainly three types of boundary conditions.

Equal to a constant.
One example of this is when the boundaries of the problem are kept at a constant temperature (for instance, simulating an object inside a fridge or in an oven). This is achieved simply with

p_{0,j,k,m} = \alpha \quad \forall j,k,m \qquad p_{N,j,k,m}=\beta \quad \forall j,k,m

p_{i,0,k,m} = \gamma \quad \forall i,k,m \qquad p_{i,N,k,m}=\delta \quad \forall i,k,m

p_{i,j,0,m} = \mu \quad \forall i,j,m \qquad p_{i,j,N,m}=\nu \quad \forall i,j,m

Typically, for the variable t, instead of a constant it is more common to use a density distribution only at the initial instant:

p_{i,j,k,0}=\eta(x_i,y_j,z_k)
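In code, this kind of condition could be imposed as in the following sketch; p_m stands for one 2D time slice (as in the program further down), and alpha, beta, gamma and delta are made-up constants:

import numpy as np

# Made-up constants and a single 2D time slice, just to show how the edges are fixed
alpha, beta, gamma, delta = 1.0, -1.0, 0.5, -0.5
nx, ny = 120, 120
p_m = np.zeros([nx, ny])       # one time slice p[:, :, m]

p_m[0, :]  = alpha             # edge x = 0
p_m[-1, :] = beta              # edge x = N
p_m[:, 0]  = gamma             # edge y = 0
p_m[:, -1] = delta             # edge y = N
# In a full program this would be done again after every time step.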

Zero flux.
This condition implies that nothing can cross the boundaries of the problem. An example of this kind of condition is ink in water inside a glass: the ink can diffuse in the water, but it cannot cross the walls of the glass.

To establish this condition, we use the forward difference approximation

\left. \frac{\partial p}{\partial x} \right|_{0,j,k,m} \approx \frac{p_{1,j,k,m}-p_{0,j,k,m}}{\delta x}

Making it equal to zero leads to:

p_{1,j,k,m} = p_{0,j,k,m}

And applying the same trick to each boundary:

p_{N,j,k,m} = p_{N-1,j,k,m}

p_{i,1,k,m} = p_{i,0,k,m}

p_{i,N,k,m} = p_{i,N-1,k,m}

p_{i,j,1,m} = p_{i,j,0,m}

p_{i,j,N,m} = p_{i,j,N-1,m}
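A possible sketch of this in code, again for a single 2D time slice p_m (my own illustration, not part of the program below): the boundary rows and columns simply copy their interior neighbours.

import numpy as np

nx, ny = 120, 120
p_m = np.random.rand(nx, ny)   # a stand-in for one time slice p[:, :, m]

# Zero flux (no ink crosses the walls): boundary values copy their neighbours
p_m[0, :]  = p_m[1, :]         # p_{0,j} = p_{1,j}
p_m[-1, :] = p_m[-2, :]        # p_{N,j} = p_{N-1,j}
p_m[:, 0]  = p_m[:, 1]         # p_{i,0} = p_{i,1}
p_m[:, -1] = p_m[:, -2]        # p_{i,N} = p_{i,N-1}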

Periodic boundaries.

This implies that the boundaries are connected: if we move far enough in the x direction, we will eventually come back to the starting position. This can be useful when we are simulating a large periodic body where the same conditions must appear everywhere. It’s as simple as this:

p_{N,j,k,m} = p_{0,j,k,m}

p_{i,N,k,m} = p_{i,0,k,m}

p_{i,j,N,m} = p_{i,j,0,m}
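A handy way of getting periodic boundaries in NumPy (not used in the example below, just an illustration) is np.roll, which shifts an array and wraps it around, so the neighbours of the last point are automatically the first point:

import numpy as np

nx = 120
p_m = np.random.rand(nx)       # stand-in for one time slice in 1D

# Second-derivative stencil with periodic boundaries: np.roll wraps the ends around
d2p = np.roll(p_m, -1) - 2 * p_m + np.roll(p_m, 1)   # divide by dx**2 when used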

Of course, all these conditions can be mixed to create the kind of problem we need to solve.

Ok. Are we ready now? No, not yet, only one more thing.

Stability conditions.

When we deal with numerical approximations, it is crucial that the conditions we use meet the requirements to find a convergent solution. In this case, the size of the grid, that is, the distance between the points where we approximate the solution, is the condition we must look at carefully. For the finite difference method with the approximations we have made, the requirement is:

\frac{D\,\delta t}{(\delta x)^2} < \frac{1}{2}
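With the parameters used in the example below (D = 1, δt = 0.002, δx = δy = 1) this is comfortably satisfied; a tiny check like this one (my own addition) can catch a bad choice of parameters early:

# Quick check of the stability condition for the parameters used below
D, dt, dx = 1, 0.002, 1
assert D * dt / dx ** 2 < 0.5, "time step too large for this grid spacing"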

Ok, so now we are ready.

The problem we are going to solve is a 2D square plate where 20 points in random positions have either +1 or -1 temperature, also chosen randomly.

For the boundary conditions we are going to use free boundaries: no constant value, no periodic boundaries and no zero flux. This is just an example; it is not related to any real problem I can imagine.

import numpy as np
import matplotlib.pyplot as plt

dt = 0.002
Dz = 1
dx = 1
dy = 1
#Those parameters fix the number of points in the grid. For instance, the total x length of the plate will be dx*nx.
nx = 120
ny = 120
nt = 50000
#We preallocate the p matrix, which will hold the numerical solution.
#Note: storing every time step takes about nx*ny*nt*8 bytes (several GB with these values); reduce nt if memory is short.
p = np.zeros([nx, ny, nt])
#Here we set the boundary condition for t (which can be called the initial condition). It is going to be 20 hot or cold points in random positions.
for f in range(20):
    p[round((nx-1)*np.random.rand()), round((ny-1)*np.random.rand()), 0] = np.sign(np.random.rand()-0.51)
for m in range(1, nt):
    #A simple implementation (but not very efficient) would be iterating over one point at a time. Because we have a matrix, we can operate on whole sections of the matrix at each step.
    #Basically, we take time slices and operate on them as a whole. To use centered differences, we simply shift the matrix one element in the x direction or in the y direction.
    p[1:nx-1, 1:ny-1, m] = p[1:nx-1, 1:ny-1, m-1] + dt*Dz*(
        (p[2:nx, 1:ny-1, m-1] - 2*p[1:nx-1, 1:ny-1, m-1] + p[0:nx-2, 1:ny-1, m-1])/np.power(dx, 2)
        + (p[1:nx-1, 2:ny, m-1] - 2*p[1:nx-1, 1:ny-1, m-1] + p[1:nx-1, 0:ny-2, m-1])/np.power(dy, 2))

#Finally we plot several iterations
fig = plt.figure()
for g in range(18):
    plt.subplot(3, 6, g+1)
    plt.axis([0, nx, 0, ny])
    plt.pcolor(p[:, :, round(g*(nt-1)/17)], vmin=-0.001, vmax=0.001, cmap='jet')
    plt.axis('off')
plt.show()

It took more time to explain the method than to write the code itself. Amazing.

This is the output from the code.

[Output figure: a 3×6 grid of snapshots showing the temperature field diffusing over time]

Nice. Pretty nice.

And that was all, I hope you liked it.

.

.

.

.

.

Ok, here is an animation of the evolution of the heat for that particular solution.

[Animation of the heat evolution]
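The post doesn’t show how the animation was produced; one possible way (a sketch of my own, assuming the array p and the variable nt from the code above are still in memory) is matplotlib’s FuncAnimation:

# Hypothetical sketch for animating the solution; run it after the script above
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
quad = ax.pcolormesh(p[:, :, 0], vmin=-0.001, vmax=0.001, cmap='jet')
ax.axis('off')

def update(frame):
    # show every 500th time step to keep the animation short
    quad.set_array(p[:, :, frame * 500].ravel())
    return quad,

anim = FuncAnimation(fig, update, frames=nt // 500, interval=50)
anim.save('heat.gif', writer='pillow')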

Extra References:

http://www.me.ucsb.edu/~moehlis/APC591/tutorials/tutorial5/tutorial5.html

http://dournac.org/info/parallel_heat2d

