Author: G. Üçoluk.
The operator changevar does a variable transformation in a set of differential equations. Syntax:
changevar(\(\langle \)depvars\(\rangle \), \(\langle \)newvars\(\rangle \), \(\langle \)eqlist\(\rangle \), \(\langle \)diffeq\(\rangle \))
\(\langle \)diffeq\(\rangle \) is either a single differential equation or a list of differential equations, \(\langle \)depvars\(\rangle \) are the dependent variables occurring in these equations, \(\langle \)newvars\(\rangle \) are the new independent variables, and \(\langle \)eqlist\(\rangle \) is a list of equations of the form \(\langle \)oldvar\(\rangle \) =\(\langle \)expression\(\rangle \), where \(\langle \)oldvar\(\rangle \) is one of the old independent variables to be replaced and \(\langle \)expression\(\rangle \) is some function of the new independent variables.
The lists \(\langle \)newvars\(\rangle \) and \(\langle \)eqlist\(\rangle \) must be of the same length. If a list has only one member, that member can be given by itself instead of as a list. The same applies to the list of differential equations, i.e., the following two commands are equivalent
changevar(u,y,x=e^y,df(u(x),x) - log(x));
changevar({u},{y},{x=e^y},{df(u(x),x) - log(x)});
except for one difference: the first command returns the transformed differential equation, the second one a list with a single element.
The switch dispjacobian governs the display of the entries of the inverse Jacobian; it is off by default.
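For instance, one might load the package and turn the switch on before a call (a minimal sketch; changevr as the package name is an assumption here, and the exact output depends on the REDUCE version):
load_package changevr;  % name of the package providing changevar (assumed)
on dispjacobian;        % also display the computed entries of the inverse Jacobian
changevar(u,y,x=e^y,df(u(x),x) - log(x));
off dispjacobian;       % restore the default behaviour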
The mathematics behind the change of independent variable(s) in differential equations is quite straightforward. It is basically the application of the chain rule. If the dependent variable of the differential equation is \(F\), the independent variables are \(x_{i}\) and the new independent variables are \(u_{i}\) (where \(\scriptstyle i=1\ldots n\)), then the first derivatives are: \[ \frac {\partial F}{\partial x_{i}} = \frac {\partial F}{\partial u_{j}} \frac {\partial u_{j}}{\partial x_{i}} \] where Einstein's summation convention is assumed. The problem is now to calculate the \(\partial u_{j}/\partial x_{i}\) terms when the change of variables is given by \[ x_{i} = f_{i}(u_{1},\ldots ,u_{n}) \] One might first think of solving these equations for \(u_{j}\), differentiating the result with respect to \(x_{i}\), and then substituting the new variables for the old ones in the calculated derivatives. This is not always a preferable way to proceed, mainly because the functions \(f_{i}\) may not be easily invertible. A better approach makes use of the Jacobian. Consider the equations above, which relate the old variables to the new ones, and differentiate them: \begin {align*} \frac {\partial x_{j}}{\partial x_{i}} & = \frac {\partial f_{j}}{\partial x_{i}} \\ \delta _{ij} & = \frac {\partial f_{j}}{\partial u_{k}} \frac {\partial u_{k}}{\partial x_{i}} \end {align*}
In the last equation, the first derivative on the right-hand side, \(\partial f_{j}/\partial u_{k}\), is nothing but the \((j,k)\)-th entry of the Jacobian matrix.
So, in matrix language, \[ \mathbf {1 = J \cdot D} \] where we defined the Jacobian \[ \mathbf {J}_{ij} \stackrel {\triangle }{=} \frac {\partial f_{i}}{\partial u_{j}} \] and the matrix of the derivatives we want to obtain as \[ \mathbf {D}_{ij} \stackrel {\triangle }{=} \frac {\partial u_{i}}{\partial x_{j}}. \] If the Jacobian has a non-vanishing determinant, it is invertible, and the matrix equation above gives \[ \mathbf { D = J^{-1}} \] so finally we have what we want: \[ \frac {\partial u_{i}}{\partial x_{j}} = \left [\mathbf {J^{-1}}\right ]_{ij} \] The higher derivatives are obtained by successive application of the chain rule, using the definitions of the old variables in terms of the new ones. It is easily verified that the only derivatives that need to be calculated are the first-order ones obtained above.
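As a minimal one-dimensional illustration (using the substitution \(x = e^{u}\) that reappears in the Euler example below), the Jacobian reduces to a single entry: \[ J = \frac {dx}{du} = e^{u}, \qquad \frac {du}{dx} = J^{-1} = e^{-u} = \frac {1}{x}, \] so that \[ \frac {dF}{dx} = \frac {1}{x} \frac {dF}{du}. \]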
The 2-dimensional Laplace equation in Cartesian coordinates is: \[ \frac {\partial ^{2} u}{\partial x^{2}} + \frac {\partial ^{2} u}{\partial y^{2}} = 0 \] Now assume we want to obtain the polar coordinate form of the Laplace equation. The change of variables is: \[ x = r \cos \theta , \qquad y = r \sin \theta \] The solution using changevar is as follows:
changevar({u},{r,theta},{x=r*cos theta,y=r*sin theta}, {df(u(x,y),x,2)+df(u(x,y),y,2)} );
Here we could omit the curly braces in the first and last arguments (because those lists have only one member) and the curly braces in the third argument (because they are optional), but we cannot leave off the curly braces in the second argument. So one could equivalently write
changevar(u,{r,theta},x=r*cos theta,y=r*sin theta, df(u(x,y),x,2)+df(u(x,y),y,2) );
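For reference, the textbook polar form that this computation should reproduce (once the trigonometric simplification discussed next is applied) is \[ \frac {\partial ^{2} u}{\partial r^{2}} + \frac {1}{r} \frac {\partial u}{\partial r} + \frac {1}{r^{2}} \frac {\partial ^{2} u}{\partial \theta ^{2}} = 0. \]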
If you have tried out the above example, you will notice that the denominator contains \(\cos ^{2} \theta + \sin ^{2} \theta \), which is actually equal to \(1\). This has of course nothing to do with changevar. One has to overcome such pattern matching problems by the conventional methods REDUCE provides (a rule, for example, will fix it, as sketched below).
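One possible sketch (just one of the several ways REDUCE offers) is to install a rewrite rule before the call:
let cos(theta)^2 = 1 - sin(theta)^2;  % so that cos(theta)^2 + sin(theta)^2 collapses to 1
changevar({u},{r,theta},{x=r*cos theta,y=r*sin theta}, {df(u(x,y),x,2)+df(u(x,y),y,2)} );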
Secondly, you will notice that the u(x,y) operator has changed to u(r,theta) in the result. There is nothing magical about this; it is just what we do with pencil and paper. u(r,theta) represents the transformed dependent variable.
Consider a differential equation of Euler type, for instance: \[ x^{3}y''' - 3 x^{2}y'' + 6 x y' - 6 y = 0 \] where the prime denotes differentiation with respect to \(x\). As is well known, Euler-type equations are solved by the change of variable: \[ x = e^{u} \] So our call to changevar reads as follows:
changevar(y, u, x=e**u,
          x**3*df(y(x),x,3) - 3*x**2*df(y(x),x,2) + 6*x*df(y(x),x) - 6*y(x));
and returns the result
df(y(u),u,3) - 6*df(y(u),u,2) + 11*df(y(u),u) - 6*y(u)
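For completeness (this is standard theory, not part of the changevar output): the transformed equation has constant coefficients, its characteristic polynomial is \(\lambda ^{3} - 6\lambda ^{2} + 11\lambda - 6 = (\lambda -1)(\lambda -2)(\lambda -3)\), so the general solution is \(y = c_{1}e^{u} + c_{2}e^{2u} + c_{3}e^{3u}\), i.e. \(y = c_{1}x + c_{2}x^{2} + c_{3}x^{3}\) in the original variable.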