We present particular techniques for solving particular types of first order differential equations. The techniques were developed in the eighteenth and nineteenth centuries, and the equations include **linear equations, separable equations, Euler homogeneous equations,** and **exact equations**. This way of studying differential equations reached a dead end fairly soon, since most differential equations cannot be solved by any of the techniques mentioned in the first sections. Instead of solving the equations, mathematicians then tried to show **whether an equation has solutions or not**, and what **properties** such a solution may have. This is less information than obtaining the solution, but it is still valuable information.

#### Linear Constant Coefficient Equations

For example, let us mention Newton’s and Lagrange’s equations for classical mechanics, Maxwell’s equations for classical electromagnetism, the Schrödinger equation for quantum mechanics, and Einstein’s equation for the general theory of gravitation.

Let us look at a few real-life examples:

**(a) Radioactive Decay**: The amount \(u\) of a radioactive material changes in time as follows,

\(\frac{\text{d}u}{\text{d}t}(t)= -ku(t), \) \( k>0, \)

where \(k\) is a positive constant representing the radioactive properties of the material.

This is a **First Order Ordinary Differential Equation (ODE)**.
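As a quick sanity check of example (a), the decay equation has the closed-form solution \(u(t)=u_{0}e^{-kt}\). The sketch below, in plain Kotlin without the S2 library, verifies numerically that a centered finite difference of \(u\) matches \(-ku(t)\); the values \(k=0.5\) and \(u_{0}=100\) are illustrative assumptions, not taken from the text.

```kotlin
import kotlin.math.abs
import kotlin.math.exp

// Illustrative constants (assumptions): decay rate k and initial amount u0
val K = 0.5
val U0 = 100.0

// Closed-form solution of du/dt = -k u with u(0) = u0
fun u(t: Double): Double = U0 * exp(-K * t)

// Centered finite-difference approximation of du/dt at time t
fun dudt(t: Double, h: Double = 1e-5): Double = (u(t + h) - u(t - h)) / (2 * h)

fun main() {
    val t = 2.0
    println("du/dt at t = 2:   ${dudt(t)}")
    println("-k u(t) at t = 2: ${-K * u(t)}")
    println("match: ${abs(dudt(t) + K * u(t)) < 1e-6}")
}
```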

**(b) Newton’s law:** Mass times acceleration equals force, \(ma = f\), where \(m\) is the particle mass,

\(m\frac{\text{d}^{2}x}{\text{d}t^{2}}(t)= f\left(t,x(t),\frac{\text{d}x}{\text{d}t}(t)\right),\)

where the unknown is \(x(t)\), the position of the particle in space at time \(t\). As we see above, the force may depend on time, on the particle position in space, and on the particle velocity.

This is a **Second Order Ordinary Differential Equation (ODE)**.

In both examples above, we notice that the *unknown function* depends on a **single** independent variable. This type of equation is called an *Ordinary Differential Equation*.

The order of a differential equation is the highest derivative order that appears in the equation. Hence, the **decay equation** in example (a) is *first order* and **Newton’s equation** in example (b) is *second order*.

#### Linear Differential Equation

A first order ODE in the unknown \(y\) is \(y'(t)= f(t,y(t)),\)

where \(f\) is given and \(y' = \frac {dy}{dt} \).

The equation is *linear* if the source function \(f\) is linear in its second argument, that is, \(y' = a(t)y + b(t)\).

The linear equation has **constant coefficients** (as in example a below) if both \(a\) and \(b\) are constants. Otherwise, the equation has **variable coefficients** (as in example b below). Let’s discuss a few examples.

Examples:

a. \(y’=3y+5\)

b. \(y'=-\frac{2}{t}y+5t\)

Let’s solve a *linear differential equation* problem,

**Problem 1:** \(y' = 2y + 3 \)

Let’s integrate on both sides with respect to \(t\),

\(\int y'(t)\,dt = 2\int y(t)\,dt+3t+c,\) \(c\in R \)

According to the Fundamental Theorem of Calculus, \(y(t)=\int y'(t)\,dt\),

so we get, \(y(t)=2\int y(t)\,dt+3t+c\).

Integrating both sides of the differential equation is not enough to find a solution \(y\): we have only rewritten the original differential equation as an integral equation, and we still need to find a primitive of \(y\). Simply integrating both sides of a linear equation does not solve the equation.

We now state a precise formula for the solutions of constant coefficient linear equations. The proof relies on a new idea—a clever use of the chain rule for derivatives.

__THEOREM of Constant Coefficients:__

The linear differential equation \(y'=ay+b\), where \(a\neq0\) and \(b\) are constants, has infinitely many solutions,

\(y(t)=ce^{at}-\frac{b}{a}\), \(c \in R\).

This is the general solution of the differential equation \(y’=ay+b\).
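The chain-rule idea behind the proof can be sketched as follows: multiplying \(y'=ay+b\) by \(e^{-at}\) turns the left-hand side into a total derivative, which can then be integrated directly.

```latex
e^{-at}\left(y' - a y\right) = b\,e^{-at}
\;\Longrightarrow\;
\left(e^{-at}\,y(t)\right)' = b\,e^{-at}
\;\Longrightarrow\;
e^{-at}\,y(t) = -\frac{b}{a}\,e^{-at} + c
\;\Longrightarrow\;
y(t) = c\,e^{at} - \frac{b}{a}.
```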

The theorem says that the equation \(y'=ay+b\) has infinitely many solutions, one solution for each value of the constant \(c\), which is not determined by the equation.
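Applying the theorem to Problem 1 (\(y' = 2y + 3\), so \(a = 2\), \(b = 3\)) gives \(y(t) = c\,e^{2t} - \frac{3}{2}\). A minimal sketch in plain Kotlin (no S2 library; the choice \(c = 1\) is an arbitrary assumption) checks numerically that this satisfies the equation:

```kotlin
import kotlin.math.abs
import kotlin.math.exp

// Coefficients of y' = a y + b from Problem 1
val A = 2.0
val B = 3.0
val C = 1.0 // arbitrary constant; the theorem says any c works

// General solution from the theorem: y(t) = c e^{at} - b/a
fun y(t: Double): Double = C * exp(A * t) - B / A

// Residual y' - (a y + b), with y' from a centered difference; should be ~0
fun residual(t: Double, h: Double = 1e-6): Double {
    val yPrime = (y(t + h) - y(t - h)) / (2 * h)
    return yPrime - (A * y(t) + B)
}

fun main() {
    for (t in listOf(0.0, 0.5, 1.0)) {
        println("residual at t = $t: ${residual(t)}")
    }
}
```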

#### The Integrating Factor Method

The *Theorem of Constant Coefficients *cannot be generalized in a simple way to all linear equations with variable coefficients. However, there is a way to solve linear equations with both constant and variable coefficients—the **Integrating Factor Method.**

The step-by-step approach for solving a problem using the Integrating Factor Method is as follows:

Step 1: Make sure your linear first order ODE is arranged in the following form,

\(\frac{\text{d}y}{\text{d}x}+P(x)y=Q(x)\), where \(P(x)\) and \(Q(x)\) can be functions of x or constants.

Step 2: Evaluate the **Integrating Factor** using the formula, \(\mu = e^{\int P(x)dx}\).

Step 3: Multiply the equation obtained in Step 1 by \(\mu\) as shown below.

\(\mu \frac{\text{d}y}{\text{d}x}+\mu P(x)y= \mu Q(x)\).

Notice that the LHS of this equation matches the result of the *Product Rule*.

Thus, the LHS can be rewritten as \(\mu \frac{\text{d}y}{\text{d}x}+\mu P(x)y= \frac{\text{d}(\mu y)}{\text{d}x}\)

Step 4: Integrate both sides with respect to x.

\(\mu y= \int \mu Q(x)\,dx\)

Step 5: Divide both sides of the obtained equation by \(\mu \).
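The identity used in Step 3 follows from the chain rule: differentiating \(\mu = e^{\int P(x)dx}\) gives \(\mu' = P(x)\mu\), so the left-hand side collapses into the derivative of a product.

```latex
\mu' = \frac{\text{d}}{\text{d}x}\,e^{\int P(x)\,dx} = P(x)\,e^{\int P(x)\,dx} = P(x)\,\mu
\quad\Longrightarrow\quad
\mu\,\frac{\text{d}y}{\text{d}x} + \mu\,P(x)\,y
= \mu\,\frac{\text{d}y}{\text{d}x} + \mu'\,y
= \frac{\text{d}(\mu\,y)}{\text{d}x}.
```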

Let us solve a problem.

**Problem 2:** \(\frac{\text{d}y}{\text{d}x}+3y=x\)

Since the equation is already in the required form, we skip Step 1 for this problem.

Step 2: \(\mu = e^{\int 3dx}\)

\( \mu = e^{3x} \)

Step 3: \(e^{3x}\left(\frac{\text{d}y}{\text{d}x}\right)+e^{3x}(3y)=e^{3x}(x)\)

This can be rewritten as \(\frac{\text{d}(e^{3x}y)}{\text{d}x}=e^{3x}(x)\), using the *Product Rule*.

Step 4: Integrating both sides, \(e^{3x}.y=\int_{}^{}e^{3x}(x)dx\)

we get, \(e^{3x}.y=\frac{x.e^{3x}}{3}-\frac{e^{3x}}{9}+c\)

Step 5: Dividing both sides by \( \mu \), we get the solution as

\(y=\frac{x}{3}-\frac{1}{9}+\frac{c}{e^{3x}}\)
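Before turning to code, the solution can be sanity-checked: for any constant \(c\), \(y = \frac{x}{3}-\frac{1}{9}+c\,e^{-3x}\) should give \(y' + 3y = x\). A minimal sketch in plain Kotlin (no S2 library; the choice \(c = 2\) is an arbitrary assumption) verifies this numerically:

```kotlin
import kotlin.math.abs
import kotlin.math.exp

val C = 2.0 // arbitrary constant of integration (illustrative)

// Solution from Step 5: y = x/3 - 1/9 + c e^{-3x}
fun y(x: Double): Double = x / 3.0 - 1.0 / 9.0 + C * exp(-3.0 * x)

// Left-hand side y' + 3y, with y' from a centered difference; should equal x
fun lhs(x: Double, h: Double = 1e-6): Double {
    val yPrime = (y(x + h) - y(x - h)) / (2 * h)
    return yPrime + 3.0 * y(x)
}

fun main() {
    for (x in listOf(0.5, 1.0, 2.0)) {
        println("y' + 3y at x = $x: ${lhs(x)} (should be $x)")
    }
}
```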

__Code:__

```
// Integrating Factor Method
%use s2
// y' + yP(x) = Q(x)   General Form
// Example: y' + 3y = x
// Here P(x) = 3 and Q(x) = x
// Integrating Factor = exp(integral(P(x))) = exp(3x)
val IF: UnivariateRealFunction = object : AbstractUnivariateRealFunction() {
    override fun evaluate(x: Double): Double {
        return Math.exp(3 * x) // integral(P(x)) = 3x
    }
}
// integrand IF(x) * Q(x) = x * exp(3x)
val IFqx: UnivariateRealFunction = object : AbstractUnivariateRealFunction() {
    override fun evaluate(x: Double): Double {
        return x * Math.exp(3 * x)
    }
}
val integrator: Integrator = Riemann()
// I = integral from 0 to 1 of IF(x) * Q(x) dx
val I: Double = integrator.integrate(IFqx, 0.0, 1.0)
// IF(x) * y(x) = I + IF(0) * y(0); picking the solution with y(0) = 0,
// y(1) = I / IF(1)
val ans = I / IF.evaluate(1.0)
println("The solution of y' + 3y = x with y(0) = 0 at x = 1 is %f%n".format(ans))
```


#### The Initial Value Problem

Sometimes in physics one is not interested in all solutions to a differential equation, but only in those satisfying extra conditions. For example, in the case of Newton’s second law of motion for a point particle, one could be interested only in solutions such that the particle is at a specific position at the initial time. Such a condition is called an initial condition, and it selects a subset of solutions of the differential equation. An initial value problem is the problem of finding a solution to both a differential equation and an initial condition.

**Initial Conditions** are values of the solution and/or its derivative(s) at specific points. Under suitable conditions, the solution to an initial value problem is **unique**, and hence *only one solution* will meet the given **initial conditions**.

An **Initial Value Problem** (abbreviated as **IVP**) is a differential equation along with a number of initial conditions. It is the problem of finding a function \(y\) of \(x\) when we know its derivative and its value \(y_{0}\) at a particular point \(x_{0}\).

Let’s discuss an example to understand the concept:

__Problem 3:__ \(\frac{\text{d}y}{\text{d}x}=6x-3;\) \(y(0)=4\)

- Multiplying both sides of the equation by \(dx\), \(dx.\frac{\text{d}y}{\text{d}x}=(6x-3)dx\)
- We get, \(dy=(6x-3)dx\)
- Integrating both sides of the obtained equation, \(\int dy=\int (6x-3)dx\)
- Hence, we get the general solution as \(y=\frac{6x^{2}}{2}-3x+c=3x^{2}-3x+c\)
- By the initial condition, we know that when \(x=0\), \(y=4\).
- Substituting \(x=0\) and \(y=4\) in the general solution, \(4=3(0)^{2}-3(0)+c\)
- We get the value of \(c\) as \(4\).
- Hence, the particular solution is \(y=3{x}^{2}-3x+4\)
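The steps above can be checked in plain Kotlin (independent of the S2 library): the particular solution should satisfy both the initial condition \(y(0)=4\) and the derivative \(y'=6x-3\).

```kotlin
import kotlin.math.abs

// Particular solution found above: y = 3x^2 - 3x + 4
fun y(x: Double): Double = 3 * x * x - 3 * x + 4

// The ODE right-hand side: y' = 6x - 3
fun yPrimeExact(x: Double): Double = 6 * x - 3

fun main() {
    // Initial condition y(0) = 4
    println("y(0) = ${y(0.0)}")
    // Derivative check at a few points via centered differences
    val h = 1e-6
    for (x in listOf(0.0, 1.0, 2.0)) {
        val d = (y(x + h) - y(x - h)) / (2 * h)
        println("y'($x) = $d, expected ${yPrimeExact(x)}")
    }
}
```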

__Code:__

```
// Initial Value Problem for a linear differential equation
%use s2
// define the derivative y'(x)
val df2: UnivariateRealFunction = object : AbstractUnivariateRealFunction() {
    override fun evaluate(x: Double): Double {
        return (6 * x) - 3 // y' = 6x - 3
    }
}
// antiderivative without the constant: y = 3x^2 - 3x + c
val in3: UnivariateRealFunction = object : AbstractUnivariateRealFunction() {
    override fun evaluate(x: Double): Double {
        return 3 * x * x - 3 * x
    }
}
val y_at_0 = 4.0 // initial condition y(0) = 4
// c = y(0) - (3*0^2 - 3*0) = 4, so y(1) = in3(1) + c
val c = y_at_0 - in3.evaluate(0.0)
val y_at_1 = in3.evaluate(1.0) + c
// y(2) = y(1) + integral from 1 to 2 of y'(x) dx
val integrator: Integrator = Riemann()
val I: Double = y_at_1 + integrator.integrate(df2, 1.0, 2.0)
println("For y' = 6x - 3 with y(0) = 4, the value y(2) is %f%n".format(I))
// plot the particular solution
val p = JGnuplot(false)
p.setTitle("General Solution Plot")
p.setTitleFontSize(14)
p.addPlot("3*x*x - 3*x + 4")
p.plot()
```


#### Linear Variable Coefficients Equations

__THEOREM of Variable Coefficients:__

If the functions \(a, b \) are continuous, then \(y'=a(t)y+b(t)\) has infinitely many solutions given by \(y(t) = c e^{A(t)}+e^{A(t)}\int e^{-A(t)}b(t)\,dt\),

where \(A(t)=\int a(t)\,dt\) and \(c \in R\).

Here, the general solution of equation is \(y(t) = c e^{A(t)}+e^{A(t)}\int_{}^{}e^{-A(t)}b(t)dt\) and the integrating factor is represented by \(\mu (t)=e^{-A(t)}\).

**Problem 4:** Find all solutions of the equation,

**\(ty’ = -2y + 4{t}^{2}\), with \(t>0 \).**

- \( y'=-\frac{2}{t}y+4t\) implies that \(a(t)= -\frac{2}{t}\) and \(b(t)=4t\).
- Thus, the equation can be rewritten as \(y' + \frac {2}{t} y = 4t \).
- Multiplying both sides by \(\mu \), \(\mu y' + \mu \frac{2}{t} y = 4t\mu \).
- Now, solve for \(\mu \) from \(\mu' = \frac{2}{t} \mu \).
- Hence, we get \(\mu = t^{2} \).
- Substituting \(\mu \) into the equation, \(t^{2} y' + t^{2} \frac{2}{t} y = 4t^{3} \).
- Rewriting the equation in terms of derivatives, \( (t^{2}y)'=(t^{4})'\).
- Integrating both sides of the equation, \( t^{2}y - t^{4}=c\).
- We get, \(y(t) = \frac {c}{t^{2}} + t^{2}\).
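The result can be verified numerically in plain Kotlin (no S2 library; the choice \(c = 5\) is an arbitrary assumption): for \(t>0\), \(y(t)=\frac{c}{t^{2}}+t^{2}\) should make the residual \(ty' + 2y - 4t^{2}\) vanish.

```kotlin
import kotlin.math.abs

val C = 5.0 // arbitrary constant (illustrative)

// Solution found above: y(t) = c/t^2 + t^2, valid for t > 0
fun y(t: Double): Double = C / (t * t) + t * t

// Residual of t y' + 2y - 4t^2, with y' from a centered difference; should be ~0
fun residual(t: Double, h: Double = 1e-6): Double {
    val yPrime = (y(t + h) - y(t - h)) / (2 * h)
    return t * yPrime + 2 * y(t) - 4 * t * t
}

fun main() {
    for (t in listOf(0.5, 1.0, 3.0)) {
        println("residual at t = $t: ${residual(t)}")
    }
}
```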

#### Separable Equations

In simple words, a** Separable differential equation** is any equation that can be written in the form \(y’=f(x)g(y)\).

The method of separation of variables is used to find the general solution to a separable differential equation.

Uses of Separable Equations:

1. Separable differential equation, \(y'=f(x)g(y)\)

2. Solution concentration, \(\frac{\text{d}x}{\text{d}t}=\text{inflow rate}-\text{outflow rate}\)

3. Newton’s law of cooling, \(\frac{\text{d}T}{\text{d}t}=k(T-T_{s})\)

Refer to the Applications of ODE section for a detailed explanation.

__Problem 5:__ Find all solutions \(y\) of the differential equation \(y'= \frac {t^{2}}{1-y^{2}}\)

- The equation can be rearranged as \((1-y^{2})y' = t^{2}\)
- The functions \(f\) and \(g\) are \(f(y)=1-y^{2}\), \(g(t)=t^{2}\)
- Integrating with respect to \(t\) on both sides of the differential equation, \( \int (1-y^{2}(t))y'(t)\,dt= \int t^{2}\,dt+c, \) where \(c\) is any constant.
- The substitution is \(y=y(t)\), \(dy=y'(t)dt\)
- We can write, \( \int (1-y^{2}(t))y'(t)\,dt = \int (1-y^{2})\,dy\)
- Therefore, \(y(t)-\frac {y^{3}(t)} {3}= \frac {t^{3}} {3} +c \), which gives the solution in __implicit form__.
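An implicit solution can still be checked numerically: integrating the ODE with a simple explicit Euler scheme, the quantity \(y - \frac{y^{3}}{3} - \frac{t^{3}}{3}\) should stay constant along the computed trajectory. The sketch below, in plain Kotlin without the S2 library, uses the illustrative initial condition \(y(0)=0\) (so the constant is \(c=0\)).

```kotlin
import kotlin.math.abs

// Implicit solution of y' = t^2 / (1 - y^2): y - y^3/3 - t^3/3 = c
// With the illustrative initial condition y(0) = 0, the constant is c = 0.
fun invariant(t: Double, y: Double): Double = y - y * y * y / 3.0 - t * t * t / 3.0

// Explicit Euler integration of the ODE from t = 0, y(0) = 0
fun integrate(tEnd: Double, steps: Int): Double {
    var t = 0.0
    var y = 0.0
    val h = tEnd / steps
    repeat(steps) {
        y += h * (t * t / (1 - y * y)) // Euler step: y_{n+1} = y_n + h f(t_n, y_n)
        t += h
    }
    return y
}

fun main() {
    val y1 = integrate(0.8, 200_000)
    println("y(0.8) ~= $y1")
    println("invariant y - y^3/3 - t^3/3 ~= ${invariant(0.8, y1)} (should stay near 0)")
}
```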