Means

Gheorghe Toader , Iulia Costin , in Means in Mathematical Analysis, 2018

2.2.13 Complementariness with respect to the identric mean

Some general necessary conditions for a mean N to be the complementary of another mean M with respect to a given mean P were determined in Costin and Toader (2008a).

Theorem 69

If the mean P has continuous partial derivatives of any order, with $P_b(1,1) \neq 0$, and the mean M has the series expansion

\[ M(1,1-x) = 1 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots , \]

then the series expansion of the complementary of M with respect to P, say

\[ N(1,1-x) = 1 + d_1 x + d_2 x^2 + d_3 x^3 + \cdots , \]

has the following first coefficients

\[ d_1 = -\frac{1}{p_b}\,(p_a c_1 + p_b), \qquad d_2 = -\frac{1}{2 p_b}\left[ p_{a^2} c_1^2 + 2 p_{ab} c_1 d_1 + p_{b^2}\,(d_1^2 - 1) + 2 p_a c_2 \right] \]

and

\[ d_3 = -\frac{1}{6 p_b}\left[ p_{a^3} c_1^3 + 3 p_{a^2 b} c_1^2 d_1 + 3 p_{a b^2} c_1 d_1^2 + p_{b^3}\,(d_1^3 + 1) + 6\left( p_{a^2} c_1 c_2 + p_{ab} c_1 d_2 + p_{ab} c_2 d_1 + p_{b^2} d_1 d_2 + p_a c_3 \right) \right], \]

where

\[ p_{a^i b^j} = P_{a^i b^j}(1,1), \qquad i, j \ge 0 . \]

Proof

If we denote

\[ f(x) = P(1, 1-x), \qquad g(x) = M(1, 1-x) \]

and

\[ h(x) = N(1, 1-x), \]

we have the condition

\[ f(x) = P(g(x), h(x)) . \]

Therefore we get successively

\[ f'(x) = P_a(g(x), h(x))\, g'(x) + P_b(g(x), h(x))\, h'(x), \]
\[ f''(x) = P_{a^2}(g(x), h(x))\, g'^2(x) + 2 P_{ab}(g(x), h(x))\, g'(x)\, h'(x) + P_{b^2}(g(x), h(x))\, h'^2(x) + P_a(g(x), h(x))\, g''(x) + P_b(g(x), h(x))\, h''(x), \]

respectively

\[ f'''(x) = P_{a^3}(g(x), h(x))\, g'^3(x) + 3 P_{a^2 b}(g(x), h(x))\, g'^2(x)\, h'(x) + 3 P_{a b^2}(g(x), h(x))\, g'(x)\, h'^2(x) + P_{b^3}(g(x), h(x))\, h'^3(x) + 3\left[ P_{a^2}(g(x), h(x))\, g'(x)\, g''(x) + P_{ab}(g(x), h(x))\, g''(x)\, h'(x) + P_{ab}(g(x), h(x))\, g'(x)\, h''(x) + P_{b^2}(g(x), h(x))\, h'(x)\, h''(x) \right] + P_a(g(x), h(x))\, g'''(x) + P_b(g(x), h(x))\, h'''(x) . \]

Taking x = 0 , as

\[ f(x) = f(0) + f'(0)\, x + \frac{f''(0)}{2}\, x^2 + \frac{f'''(0)}{6}\, x^3 + \cdots , \]

but also

\[ f(x) = P(1, 1-x) = 1 - p_b\, x + \frac{p_{b^2}}{2}\, x^2 - \frac{p_{b^3}}{6}\, x^3 + \cdots , \]

we obtain the above coefficients. □
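A quick way to test the coefficient formulas of Theorem 69 is to take a mean P for which the complementary is available in closed form. The sketch below (ours, with sympy) uses P = G: the defining relation $P(M(a,b), N(a,b)) = P(a,b)$ used in the proof gives $N = ab/M$, so $N(1,1-x) = (1-x)/M(1,1-x)$, and its expansion must reproduce $d_1, d_2, d_3$.

import sympy as sp

x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
a, b = sp.symbols('a b', positive=True)

Mx = 1 + c1*x + c2*x**2 + c3*x**3                    # M(1, 1-x) truncated at third order
Nx = sp.series((1 - x)/Mx, x, 0, 4).removeO()        # complementary w.r.t. G: N = ab/M

P = sp.sqrt(a*b)                                     # P = G
def p(i, j):                                         # p_{a^i b^j} = P_{a^i b^j}(1, 1)
    d = P
    if i: d = sp.diff(d, a, i)
    if j: d = sp.diff(d, b, j)
    return d.subs({a: 1, b: 1})

d1 = -(p(1,0)*c1 + p(0,1)) / p(0,1)
d2 = -(p(2,0)*c1**2 + 2*p(1,1)*c1*d1 + p(0,2)*(d1**2 - 1) + 2*p(1,0)*c2) / (2*p(0,1))
d3 = -(p(3,0)*c1**3 + 3*p(2,1)*c1**2*d1 + 3*p(1,2)*c1*d1**2 + p(0,3)*(d1**3 + 1)
       + 6*(p(2,0)*c1*c2 + p(1,1)*c1*d2 + p(1,1)*c2*d1 + p(0,2)*d1*d2 + p(1,0)*c3)) / (6*p(0,1))

print(sp.expand(Nx - (1 + d1*x + d2*x**2 + d3*x**3)))   # prints 0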

Corollary 54

If the symmetric mean P has continuous partial derivatives up to order 3 and the mean M has the series expansion

\[ M(1,1-x) = 1 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots , \]

then the first coefficients of the series expansion of the complementary of M with respect to P are

\[ N(1,1-x) = 1 - (c_1 + 1)\,x - \left[ 4\alpha\, c_1(c_1+1) + c_2 \right] x^2 - \left[ 24\alpha^2 c_1(c_1+1)(2c_1+1) + 12\alpha\, c_2(2c_1+1) - 4\beta\, c_1(c_1+1) + 3 c_3 \right] \frac{x^3}{3} + \cdots , \]

where

\[ \alpha = P_{a^2}(1,1) \qquad\text{and}\qquad \beta = P_{a^3}(1,1) . \]

Proof

As the mean P is symmetric, we have (2.27), (2.29) and (2.31), giving the above coefficients. □

Theorem 70

If the mean M has the series expansion

M ( 1 , 1 x ) = 1 + c 1 x + c 2 x 2 + c 3 x 3 + ,

then the series expansion of the complementary of M, with respect to I , has the following first coefficients

\[ M^{\,I}(1,1-x) = 1 - (c_1 + 1)\,x - \frac{1}{3}\left( 3 c_2 - c_1^2 - c_1 \right) x^2 - \frac{1}{18}\left( 18 c_3 - 12 c_1 c_2 - 6 c_2 + 2 c_1^3 - 2 c_1 \right) x^3 + \cdots . \]

Proof

Indeed, in this case

\[ \alpha = -\frac{1}{12}, \qquad \beta = \frac{1}{8} . \]

 □
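The expansion in Theorem 70 can also be checked numerically. In the sketch below (ours), the complementary of the arithmetic mean A with respect to the identric mean is obtained by solving $I(A(a,b), N) = I(a,b)$ by bisection and compared with the series $1 - x/2 - x^2/12 - x^3/24$; the usual closed form of the identric mean is assumed, and the helper names are ours.

import math

def identric(a, b):
    # identric mean: I(a, b) = (1/e) * (b**b / a**a) ** (1/(b - a)),  I(a, a) = a
    if a == b:
        return a
    return math.exp((b * math.log(b) - a * math.log(a)) / (b - a) - 1.0)

def complement_wrt_identric(m, a, b):
    # solve I(m, n) = I(a, b) for n by bisection (I is increasing in each argument)
    target, lo, hi = identric(a, b), 1e-12, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if identric(m, mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

x = 0.05
numeric = complement_wrt_identric(1.0 - x / 2.0, 1.0, 1.0 - x)   # A^I(1, 1-x)
series = 1.0 - x / 2.0 - x**2 / 12.0 - x**3 / 24.0               # expansion from Theorem 70
print(numeric, series)                                           # agree to about x**4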

Corollary 55

For no $i, j = 1, 2, \ldots, 10$ does

\[ F_i^{\,I} = F_j \]

hold.

Proof

We have

\[ A^{\,I}(1,1-x) = 1 - \frac{x}{2} - \frac{x^2}{12} - \frac{x^3}{24} + \cdots, \qquad G^{\,I}(1,1-x) = 1 - \frac{x}{2} + \frac{x^2}{24} + \frac{x^3}{48} + \cdots, \]
\[ H^{\,I}(1,1-x) = 1 - \frac{x}{2} + \frac{x^2}{6} + \frac{x^3}{12} + \cdots, \qquad C^{\,I}(1,1-x) = 1 - \frac{x}{2} - \frac{x^2}{3} - \frac{x^3}{6} + \cdots, \]
\[ F_5^{\,I}(1,1-x) = 1 - \frac{x}{2} - \frac{5x^2}{24} - \frac{x^3}{6} + \cdots, \qquad F_6^{\,I}(1,1-x) = 1 - \frac{x}{2} - \frac{5x^2}{24} - \frac{x^3}{24} + \cdots, \]
\[ F_7^{\,I}(1,1-x) = 1 - x^2 - \frac{x^3}{3} + \cdots, \qquad F_8^{\,I}(1,1-x) = 1 - x^2 + \frac{2x^3}{3} + \cdots, \]
\[ F_9^{\,I}(1,1-x) = 1 - x + x^2 - \frac{x^3}{3} + \cdots, \qquad F_{10}^{\,I}(1,1-x) = 1 - x + x^2 - \frac{4x^3}{3} + \cdots . \]

For each pair $(i, j)$, at least one of the coefficients of $F_i^{\,I}$ on the left differs from the corresponding coefficient in the expansion of the mean $F_j$ on the right. □


URL:

https://www.sciencedirect.com/science/article/pii/B9780128110805000025

Double sequences

Gheorghe Toader , Iulia Costin , in Means in Mathematical Analysis, 2018

3.3 Rate of convergence of an Archimedean double sequence

In the case of the classical Archimedean algorithm, we have shown that the errors of the sequences $(a_n)_{n \ge 0}$ and $(b_n)_{n \ge 0}$ tend to zero asymptotically like $1/4^n$. In Foster and Phillips (1984) it is proved that this result remains valid for the A-composition of arbitrary differentiable symmetric means. For the general case, the following evaluation is given in Costin (2004).

Let us consider two means M and N given on the interval J and two initial values $a, b \in J$. Define the pair of sequences $(a_n)_{n \ge 0}$ and $(b_n)_{n \ge 0}$ by

\[ a_{n+1} = M(a_n, b_n) \quad\text{and}\quad b_{n+1} = N(a_{n+1}, b_n), \qquad n \ge 0 , \]

where $a_0 = a$, $b_0 = b$. Denote by $\alpha$ the common limit of the two sequences:

\[ \alpha = \lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n . \]

Theorem 74

If the means M and N have continuous partial derivatives up to the second order, then the errors of the sequences $(a_n)_{n \ge 0}$ and $(b_n)_{n \ge 0}$ tend to zero asymptotically like

\[ \left[ M_a(\alpha, \alpha)\,\bigl(1 - N_a(\alpha, \alpha)\bigr) \right]^{n} . \]

Proof

If we write

\[ a_n = \alpha + \delta_n, \qquad b_n = \alpha + \varepsilon_n , \]

we deduce that, as $n \to \infty$,

\[ \alpha + \delta_{n+1} = M(\alpha + \delta_n, \alpha + \varepsilon_n) = M(\alpha,\alpha) + M_a(\alpha,\alpha)\,\delta_n + M_b(\alpha,\alpha)\,\varepsilon_n + O(\delta_n^2 + \varepsilon_n^2) . \]

From (2.26) we get

(3.8) \[ \delta_{n+1} = M_a(\alpha,\alpha)\,\delta_n + \left[ 1 - M_a(\alpha,\alpha) \right] \varepsilon_n + O(\delta_n^2 + \varepsilon_n^2) . \]

Then

\[ \alpha + \varepsilon_{n+1} = N(\alpha + \delta_{n+1}, \alpha + \varepsilon_n) = N(\alpha,\alpha) + N_a(\alpha,\alpha)\,\delta_{n+1} + N_b(\alpha,\alpha)\,\varepsilon_n + O(\delta_{n+1}^2 + \varepsilon_n^2) . \]

Using again (2.26) and (3.8) we have

\[ \varepsilon_{n+1} = N_a(\alpha,\alpha)\left[ M_a(\alpha,\alpha)\,\delta_n + \bigl(1 - M_a(\alpha,\alpha)\bigr)\,\varepsilon_n \right] + \left[ 1 - N_a(\alpha,\alpha) \right] \varepsilon_n + O(\delta_n^2 + \varepsilon_n^2) , \]

thus

(3.9) \[ \varepsilon_{n+1} = M_a(\alpha,\alpha)\,N_a(\alpha,\alpha)\,\delta_n + \left[ 1 - M_a(\alpha,\alpha)\,N_a(\alpha,\alpha) \right] \varepsilon_n + O(\delta_n^2 + \varepsilon_n^2) . \]

Subtracting (3.9) from (3.8) we get

\[ \delta_{n+1} - \varepsilon_{n+1} = M_a(\alpha,\alpha)\left[ 1 - N_a(\alpha,\alpha) \right] (\delta_n - \varepsilon_n) + O(\delta_n^2 + \varepsilon_n^2) . \]

On the other hand, from the monotonicity of $(a_n)_{n \ge 0}$ and $(b_n)_{n \ge 0}$ we can assume that $\delta_n > 0$ and $\varepsilon_n < 0$ for all $n > 0$. The cases when $\delta_n < 0$ and $\varepsilon_n > 0$ can be treated similarly. We have

\[ \frac{\varepsilon_n - \varepsilon_{n+1}}{\delta_n - \delta_{n+1}} = \frac{M_a(\alpha,\alpha)\,N_a(\alpha,\alpha)\,(\varepsilon_n - \delta_n) + O(\delta_n^2 + \varepsilon_n^2)}{\left[ 1 - M_a(\alpha,\alpha) \right](\delta_n - \varepsilon_n) + O(\delta_n^2 + \varepsilon_n^2)} , \]

thus

\[ \varepsilon_n - \varepsilon_{n+1} = \frac{M_a(\alpha,\alpha)\,N_a(\alpha,\alpha)}{M_a(\alpha,\alpha) - 1}\,(\delta_n - \delta_{n+1}) + (\delta_n - \delta_{n+1})\,O(|\delta_n| + |\varepsilon_n|) . \]

Replacing n by $n+1, n+2, \ldots, n+p-1$ ($p \in \mathbb{N}$), adding, and using the fact that $\delta_n$ and $\varepsilon_n$ tend monotonically to zero, we obtain

\[ \varepsilon_n - \varepsilon_{n+p} = \frac{M_a(\alpha,\alpha)\,N_a(\alpha,\alpha)}{M_a(\alpha,\alpha) - 1}\,(\delta_n - \delta_{n+p}) + (\delta_n - \delta_{n+p})\,O(|\delta_n| + |\varepsilon_n|) . \]

Letting $p \to \infty$ we obtain

\[ \varepsilon_n = \frac{M_a(\alpha,\alpha)\,N_a(\alpha,\alpha)}{M_a(\alpha,\alpha) - 1}\,\delta_n + O(\delta_n^2 + \varepsilon_n^2) . \]

Using (3.8) we deduce that

\[ \delta_{n+1} = M_a(\alpha,\alpha)\left[ 1 - N_a(\alpha,\alpha) \right] \delta_n + O(\delta_n^2) \]

and from (3.9) we have

\[ \varepsilon_{n+1} = M_a(\alpha,\alpha)\left[ 1 - N_a(\alpha,\alpha) \right] \varepsilon_n + O(\varepsilon_n^2) . \]

 □

Remark 71

Some evaluations and conclusions related to the previous theorem may also be found in Foster and Phillips (1985). In the case of symmetric means, the result was proved in Foster and Phillips (1984). In this case, we saw in (2.27) that

\[ M_a(\alpha,\alpha) = N_a(\alpha,\alpha) = \frac{1}{2}, \qquad \forall \alpha \in J , \]

and we get the following.

Corollary 57

If the means M and N are symmetric and have continuous partial derivatives up to the second order, then the error of the sequences $(a_n)_{n \ge 0}$ and $(b_n)_{n \ge 0}$ tends to zero asymptotically like $1/4^n$.
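A short numerical experiment (ours) illustrates Corollary 57 for the symmetric pair M = A and N = G: the consecutive error ratios of the Archimedean double sequence approach 4, i.e. the error behaves like $1/4^n$. Function and variable names below are ours.

import math

def archimedean(M, N, a, b, steps):
    # a_{n+1} = M(a_n, b_n),  b_{n+1} = N(a_{n+1}, b_n)
    out = [(a, b)]
    for _ in range(steps):
        a = M(a, b)
        b = N(a, b)
        out.append((a, b))
    return out

A = lambda u, v: 0.5 * (u + v)          # arithmetic mean
G = lambda u, v: math.sqrt(u * v)       # geometric mean

pairs = archimedean(A, G, 1.0, 2.0, steps=40)
alpha = pairs[-1][0]                     # common limit, accurate to machine precision here
errs = [abs(an - alpha) for an, _ in pairs[:10]]
print([errs[n] / errs[n + 1] for n in range(9)])   # ratios tend to 4, i.e. error ~ 1/4**n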


URL:

https://www.sciencedirect.com/science/article/pii/B9780128110805000037

Differential Transform Method

L. Zheng , X. Zhang , in Modeling and Analysis of Modern Fluid Problems, 2017

5.1.2.2 Differential Transformation for Functions of Several Variables

Supposing that the function w(x, t) has continuous partial derivatives, the differential transform of the function w(x, t) is defined by

(5.3) \[ W(k,h) = M(k)\,N(h) \left[ \frac{\partial^{\,k+h}\bigl( q(t)\,p(x)\,w(x,t) \bigr)}{\partial x^{h}\, \partial t^{k}} \right]_{x = x_0,\; t = t_0}, \qquad k, h = 0, 1, 2, 3, \ldots , \]

where w(t, x) and W(k, h) are called the original function and the differential transform function, respectively. The inverse differential transform of W(k, h) is defined as

(5.4) \[ w(t,x) = \frac{1}{q(t)\,p(x)} \sum_{k=0}^{\infty} \sum_{h=0}^{\infty} \frac{W(k,h)}{M(k)\,N(h)\,h!\,k!}\,(t - t_0)^{k}\,(x - x_0)^{h} . \]

As in the one-variable case, $M(k) \neq 0$ and $N(h) \neq 0$ are the proportionality factors of the transformation, and $q(t) \neq 0$ and $p(x) \neq 0$ are the respective kernels of the transformation of the known function. If $q(t) = 1$ and $p(x) = 1$, the proportional functions are $M(k) = H_1^{\,k}$ and $N(h) = H_2^{\,h}$, or $M(k) = H_1^{\,k}/k!$ and $N(h) = H_2^{\,h}/h!$ [$H_i$ ($i = 1, 2$) are called proportional constants]. When $M(k) = H_1^{\,k}/k!$ and $N(h) = H_2^{\,h}/h!$, the product operations of the transformation are relatively simple. If $q(t) = p(x) = 1$ is chosen, the differential transform of the two-variable function w(t, x) can be written as

(5.5) \[ W(k,h) = \frac{1}{k!\,h!} \left[ \frac{\partial^{\,k+h} w(x,t)}{\partial x^{h}\, \partial t^{k}} \right]_{x = x_0,\; t = t_0}, \qquad k, h = 0, 1, 2, 3, \ldots . \]

The inverse differential transform of the function W(k, h) is written as

(5.6) \[ w(t,x) = \sum_{k=0}^{\infty} \sum_{h=0}^{\infty} W(k,h)\,(t - t_0)^{k}\,(x - x_0)^{h} . \]

In the same way, we can define the differential transform of functions of three or more variables.
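The pair (5.5)–(5.6) can be exercised directly with a computer algebra system. The sketch below (ours, with sympy and a sample function $w(x,t) = e^{x}\cos t$ of our choosing) computes W(k, h) by differentiation and checks that a partial sum of the inverse transform reproduces w near $(x_0, t_0) = (0, 0)$.

import sympy as sp

x, t = sp.symbols('x t')
w = sp.exp(x) * sp.cos(t)          # sample original function, transformed about (x0, t0) = (0, 0)

def W(k, h):
    # two-variable differential transform, Eq. (5.5)
    d = w
    if k: d = sp.diff(d, t, k)
    if h: d = sp.diff(d, x, h)
    return d.subs({x: 0, t: 0}) / (sp.factorial(k) * sp.factorial(h))

# partial sum of the inverse transform, Eq. (5.6)
approx = sum(W(k, h) * t**k * x**h for k in range(8) for h in range(8))
point = {x: sp.Rational(3, 10), t: sp.Rational(1, 5)}
print(float(approx.subs(point)), float(w.subs(point)))   # nearly identical values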


URL:

https://www.sciencedirect.com/science/article/pii/B9780128117538000050

Vector Analysis

Frank E. Harris , in Mathematics for Physical Science and Engineering, 2014

Conservative Fields

Most students are familiar, from a course in elementary physics or by experience, with the fact that if friction is disregarded the mechanical work of moving an object of mass M in a gravitational field between two nearby points at respective heights h 1 and h 2 is M g ( h 2 - h 1 ) , irrespective of the path by which the motion takes place. Here g is a constant ("the acceleration of gravity") characterizing the gravitational field. This statement corresponds to the following equation containing a line integral along the chosen path (designated C ):

(7.59) \[ w_{1 \to 2} = \int_C \mathbf{F} \cdot d\mathbf{r} = Mg\,(h_2 - h_1) . \]

Here F is the force needed to overcome gravity, which in a coordinate system with $+x$ in the upward direction is $Mg\,\hat{\mathbf{e}}_x$, and $d\mathbf{r} = \hat{\mathbf{e}}_x\,dx + \hat{\mathbf{e}}_y\,dy + \hat{\mathbf{e}}_z\,dz$. Our equation then has the more detailed evaluation

(7.60) \[ w_{1 \to 2} = \int_{h_1}^{h_2} Mg\,\hat{\mathbf{e}}_x \cdot \bigl( \hat{\mathbf{e}}_x\,dx + \hat{\mathbf{e}}_y\,dy + \hat{\mathbf{e}}_z\,dz \bigr) = Mg \int_{h_1}^{h_2} dx = Mg\,(h_2 - h_1) , \]

confirming that $w_{1 \to 2}$ depends on C only through the values of h at its endpoints.

When a vector field has an integral of the type in Eq. (7.59) that depends only on the endpoints of the path, it is said to be a conservative field or (if the field represents a force) we may call the force conservative. Note that if we travel a path C in reverse, F stays the same as before but $d\mathbf{r}$ changes sign, so $w_{2 \to 1} = -w_{1 \to 2}$. Since $w_{1 \to 2}$ is independent of the path, we will still have $w_{2 \to 1} = -w_{1 \to 2}$ even if the forward and reverse journeys are on different paths, with the result that any closed path (one that starts and ends at the same point) will have $w = 0$. That is the essence of the meaning of the term conservative.

Not all forces are conservative; in our current example we could have included friction. Because friction is always in a direction that opposes the motion, to overcome it will always require (positive) work, and travel along a closed path (even if level) will have a value of w that depends upon the path.

Once we have identified our gravitational force as conservative, we can associate with every point the amount of work needed to get there from some reference point. This process will define a scalar field ψ , which we call the scalar potential (or just potential) of our (vector) gravitational field. In the present problem we have ψ = Mgh , where the reference point is assigned the value h = 0 . It is important to notice that the zero of ψ depends on our choice of reference point, and that computations of work depend upon differences in ψ and not on their individual values. The potential has a clear physical significance; its value at x = h represents the amount of energy (positive or negative) that can be recovered by moving back to a point where x = 0 .

It is of extreme importance to note that a vector field can be identified with a scalar potential only if the field is conservative; a nonconservative force will not have fixed amounts of energy associated with specific points. An important example of a nonconservative field is the magnetic field B produced by an electric current. The integral $\oint \mathbf{B} \cdot d\mathbf{r}$ for a loop around a current-carrying wire (see Example 7.4.7) is nonzero. (Energy conservation in physics is saved because there exist no magnetic monopoles that can capture additional energy each time they go around the loop.)

An objective of the present discussion is to identify and understand the key properties of conservative fields. Let's start by assuming a vector field F has components that are continuous and have continuous partial derivatives, and that F is conservative in a simply connected region of space. 1 The connectedness requirement is necessary to make some of the following statements true.

The fact that F has been assumed conservative has already led us to conclude that

(7.61) \[ \int_P^Q \mathbf{F} \cdot d\mathbf{r} = \psi(Q) - \psi(P) \]

for some single-valued function ψ . Equation (7.61) is equivalent to the statement that F · d r is the differential of ψ , i.e., d ψ = F · d r .

Next, note that

(7.62) \[ \frac{\partial \psi}{\partial x} = \mathbf{F} \cdot \frac{\partial \mathbf{r}}{\partial x} = F_x , \]

and that $\partial\psi/\partial y = F_y$ and $\partial\psi/\partial z = F_z$. In other words, $\mathbf{F} = \nabla\psi$, showing that if F is conservative, it must be the gradient of some ψ.

Since F is a gradient,

(7.63) \[ \nabla \times \mathbf{F} = \nabla \times \nabla\psi = 0 , \]

where we have used Eq. (7.43). This equation confirms that a conservative force has a vanishing curl.

Finally, we can use Stokes' theorem and the vanishing of curl F to conclude that

(7.64) \[ \oint \mathbf{F} \cdot d\mathbf{r} = 0 \]

for every simple closed curve in the region, thereby recovering our original assumption that F is conservative. The overall result of this circular discussion is that any of the conditions we have derived implies all the others.

We close this subsection with two further remarks. First, the condition curl F = 0 is equivalent to the statement that F is irrotational, and we now have the deeper understanding that this property prevents us from following stream lines of F to reach the same point with multiple values of ψ . Second, we must stress the importance of the connectedness requirement. If our region were multiply connected (an example is a torus—the mathematical equivalent of a doughnut or life preserver), some of the above equivalences could not be established.

Example 7.5.1 Conservative and Nonconservative Forces

Consider the two force fields studied in Example 7.2.12,

\[ \mathbf{F} = y\,\hat{\mathbf{e}}_x + x\,\hat{\mathbf{e}}_y \qquad\text{and}\qquad \mathbf{G} = y\,\hat{\mathbf{e}}_x - x\,\hat{\mathbf{e}}_y . \]

In that Example, we found that $\nabla \times \mathbf{F} = 0$, while $\nabla \times \mathbf{G} = -2\,\hat{\mathbf{e}}_z$. These relationships indicate that F is a conservative force and that G is nonconservative.

Let's check by computing $\int \mathbf{F} \cdot d\mathbf{r}$ and $\int \mathbf{G} \cdot d\mathbf{r}$ for the two paths between $(0,0,0)$ and $(x_0, y_0, 0)$ shown in Fig. 7.14. Identifying the individual line segments of the paths by their labels in the figure, we evaluate for F

\[ \int_A \mathbf{F} \cdot d\mathbf{r} = \int_0^{x_0} y(0)\,dx = 0, \qquad \int_B \mathbf{F} \cdot d\mathbf{r} = \int_0^{y_0} x(x_0)\,dy = x_0 y_0 , \]
\[ \int_C \mathbf{F} \cdot d\mathbf{r} = \int_0^{y_0} x(0)\,dy = 0, \qquad \int_D \mathbf{F} \cdot d\mathbf{r} = \int_0^{x_0} y(y_0)\,dx = x_0 y_0 . \]

We have written $x(0)$, $x(x_0)$, $y(0)$, and $y(y_0)$ instead of their respective values $(0, x_0, 0, y_0)$ to make these evaluations clearer. Combining now the integrals over segments A and B, and those over segments C and D, we find that

\[ \int_{A+B} \mathbf{F} \cdot d\mathbf{r} = \int_{C+D} \mathbf{F} \cdot d\mathbf{r} = x_0 y_0 , \]

consistent with the claim that the force F is conservative. Our calculation also reveals that a potential from which we can recover F is ψ ( x , y , z ) = xy . We can verify that ψ is a valid scalar potential for F by computing

\[ \nabla\psi = y\,\hat{\mathbf{e}}_x + x\,\hat{\mathbf{e}}_y . \]

Figure 7.14. Two paths for the integrations in Example 7.5.1.

Continuing now with the force G, we have

\[ \int_A \mathbf{G} \cdot d\mathbf{r} = \int_0^{x_0} y(0)\,dx = 0, \qquad \int_B \mathbf{G} \cdot d\mathbf{r} = \int_0^{y_0} \left[ -x(x_0) \right] dy = -x_0 y_0 , \]
\[ \int_C \mathbf{G} \cdot d\mathbf{r} = \int_0^{y_0} x(0)\,dy = 0, \qquad \int_D \mathbf{G} \cdot d\mathbf{r} = \int_0^{x_0} y(y_0)\,dx = x_0 y_0 . \]

Combining now the integrals over segments A and B, and those over segments C and D, we get

\[ \int_{A+B} \mathbf{G} \cdot d\mathbf{r} = -x_0 y_0, \qquad \int_{C+D} \mathbf{G} \cdot d\mathbf{r} = x_0 y_0 , \]

showing that the integral of G from ( 0 , 0 , 0 ) to ( x 0 , y 0 , 0 ) has a value that depends upon the path. This confirms that G is nonconservative and that there is no scalar potential for G.
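The two path integrals of Example 7.5.1 are easy to reproduce numerically. The sketch below (ours) approximates $\int \mathbf{F} \cdot d\mathbf{r}$ and $\int \mathbf{G} \cdot d\mathbf{r}$ along the polylines A+B and C+D with a midpoint rule; the helper names and the choice $x_0 = 2$, $y_0 = 3$ are ours.

import numpy as np

def line_integral(field, points):
    # midpoint-rule approximation of the work integral along a polyline
    pts = np.asarray(points, dtype=float)
    mids = 0.5 * (pts[:-1] + pts[1:])
    dr = np.diff(pts, axis=0)
    return float(np.sum(np.einsum('ij,ij->i', field(mids), dr)))

x0, y0 = 2.0, 3.0
F = lambda p: np.column_stack((p[:, 1],  p[:, 0]))     # F = y e_x + x e_y
G = lambda p: np.column_stack((p[:, 1], -p[:, 0]))     # G = y e_x - x e_y

s = np.linspace(0.0, 1.0, 400)[:, None]
seg = lambda p, q: (1 - s) * np.asarray(p, float) + s * np.asarray(q, float)
path_AB = np.vstack((seg((0, 0), (x0, 0)), seg((x0, 0), (x0, y0))))   # segment A, then B
path_CD = np.vstack((seg((0, 0), (0, y0)), seg((0, y0), (x0, y0))))   # segment C, then D

for name, fld in (("F", F), ("G", G)):
    print(name, line_integral(fld, path_AB), line_integral(fld, path_CD))
# F: x0*y0 on both paths; G: -x0*y0 along A+B but +x0*y0 along C+D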


URL:

https://www.sciencedirect.com/science/article/pii/B9780128010006000079

Complex Variable Theory

George B. Arfken , ... Frank E. Harris , in Mathematical Methods for Physicists (Seventh Edition), 2013

Cauchy's Theorem: Proof

We now proceed to a proof of Cauchy's integral theorem. The proof we offer is subject to a restriction originally accepted by Cauchy but later shown unnecessary by Goursat. What we need to show is that

\[ \oint_C f(z)\,dz = 0 , \]

subject to the requirement that C is a closed contour within a simply connected region R where f (z) is analytic. See Fig. 11.4. The restriction needed for Cauchy's (and the present) proof is that if we write f (z) = u(x, y) + iv(x, y), the partial derivatives of u and v are continuous.

Figure 11.4. A closed contour C within a simply connected region R.

We intend to prove the theorem by direct application of Stokes' theorem (Section 3.8). Writing dz = dx + i dy,

(11.21) \[ \oint_C f(z)\,dz = \oint_C (u + iv)(dx + i\,dy) = \oint_C (u\,dx - v\,dy) + i \oint_C (v\,dx + u\,dy) . \]

These two line integrals may be converted to surface integrals by Stokes' theorem, a procedure that is justified because we have assumed the partial derivatives to be continuous within the area enclosed by C. In applying Stokes' theorem, note that the final two integrals of Eq. (11.21) are real.

To proceed further, we note that all the integrals involved here can be identified as having integrands of the form $(V_x \hat{\mathbf{e}}_x + V_y \hat{\mathbf{e}}_y) \cdot d\mathbf{r}$, the integration is around a loop in the xy plane, and the value of the integral will be the surface integral, over the enclosed area, of the z component of $\nabla \times (V_x \hat{\mathbf{e}}_x + V_y \hat{\mathbf{e}}_y)$. Thus, Stokes' theorem says that

(11.22) \[ \oint_C (V_x\,dx + V_y\,dy) = \int_A \left( \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y} \right) dx\,dy , \]

with A being the 2-D region enclosed by C.

For the first integral in the second line of Eq. (11.21), let u = Vx and v = −Vy . 2 Then

(11.23) \[ \oint_C (u\,dx - v\,dy) = \oint_C (V_x\,dx + V_y\,dy) = \int_A \left( \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y} \right) dx\,dy = -\int_A \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) dx\,dy . \]

For the second integral on the right side of Eq. (11.21) we let u = Vy and v = Vx . Using Stokes' theorem again, we obtain

(11.24) \[ \oint_C (v\,dx + u\,dy) = \int_A \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy . \]

Inserting Eqs. (11.23) and (11.24) into Eq. (11.21), we now have

(11.25) \[ \oint_C f(z)\,dz = -\int_A \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) dx\,dy + i \int_A \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy = 0 . \]

Remembering that f (z) has been assumed analytic, we find that both the surface integrals in Eq. (11.25) are zero because application of the Cauchy-Riemann equations causes their integrands to vanish. This establishes the theorem.
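A numerical illustration of the theorem (ours): integrating an analytic function around the unit circle gives essentially zero, whereas a function whose u and v violate the Cauchy-Riemann equations (here the complex conjugate) does not. The contour and sample integrands are our choices.

import numpy as np

n = 20000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * theta)                    # unit circle as the closed contour C
dz = 1j * z * (2.0 * np.pi / n)           # dz = i e^{i theta} d theta

def contour_integral(f):
    return np.sum(f(z) * dz)

print(contour_integral(lambda w: w**2 + 3*w - 1))   # analytic: essentially 0
print(contour_integral(np.conj))                     # violates Cauchy-Riemann: about 2*pi*i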


URL:

https://www.sciencedirect.com/science/article/pii/B9780123846549000116

Functional Equations in Applied Sciences

In Mathematics in Science and Engineering, 2005

The associativity equation

Given the associativity equation

\[ F[F(x,y), z] = F[x, F(y,z)] , \]

by differentiating with respect to x and y, and setting y = b we can write

\[ F_1[F(x,b), z]\,F_1(x,b) = F_1[x, F(b,z)], \qquad F_1[F(x,b), z]\,F_2(x,b) = F_2[x, F(b,z)]\,F_1(b,z) , \]

where the subindices refer to the variable with respect to which we differentiate, and assuming $F_1(x, y) \neq 0$ and $F_2(x, y) \neq 0$ we get

\[ \frac{F_1[x, F(b,z)]}{F_2[x, F(b,z)]} = \frac{F_1(x,b)}{F_2(x,b)}\,F_1(b,z) . \]

If now F(b, z) = u can be solved for z (i.e., z = ϕ(u)), we obtain

\[ \frac{F_1(x,u)}{F_2(x,u)} = \frac{p'(x)}{q'(u)}; \qquad p(x) = \int \frac{F_1(x,b)}{F_2(x,b)}\,dx , \quad q(u) = \int \frac{du}{F_1(b, \phi(u))} , \]

where p(x) and q(u) are strictly monotonic. This implies

\[ \frac{\partial\bigl[ F(x,y),\; p(x) + q(y) \bigr]}{\partial(x, y)} = 0 \;\Longrightarrow\; F(x,y) = g\bigl[ p(x) + q(y) \bigr] , \]

and substituting back into the initial equation and setting y = b, we get

\[ g\bigl\{ p\bigl[ g[p(x) + q(b)] \bigr] + q(z) \bigr\} = g\bigl\{ p(x) + q\bigl[ g[p(b) + q(z)] \bigr] \bigr\} , \]

which leads to

\[ p\bigl[ g[p(x) + q(b)] \bigr] - p(x) = q\bigl[ g[p(b) + q(z)] \bigr] - q(z) = C , \]

that is

\[ g[p(x) + q(b)] = p^{-1}[p(x) + C] \;\Longrightarrow\; g(u) = p^{-1}[u + C - q(b)], \qquad g[p(b) + q(z)] = q^{-1}[q(z) + C] \;\Longrightarrow\; g(u) = q^{-1}[u + C - p(b)] , \]

or

\[ p(x) = g^{-1}(x) + C - q(b); \qquad q(y) = g^{-1}(y) + C - p(b) . \]

Therefore, the associativity equation becomes

\[ F(x,y) = g\bigl[ g^{-1}(x) + g^{-1}(y) + A \bigr]; \qquad A = 2C - q(b) - p(b) , \]

and calling

\[ f(z) = g(z - A) , \]

we finally obtain

\[ F(x,y) = f\bigl[ f^{-1}(x) + f^{-1}(y) \bigr] . \]

Therefore the following theorem holds.

Theorem 7.9

(The associativity equation).

The general local solution of the functional equation

(7.77) \[ F[F(x,y), z] = F[x, F(y,z)] \]

is

(7.78) \[ F(x,y) = f\bigl[ f^{-1}(x) + f^{-1}(y) \bigr] , \]

with continuously differentiable and strictly monotonic f, if the domain of (7.77) is such that F possesses continuous partial derivatives and if $F_1(x, y) \neq 0$, $F_2(x, y) \neq 0$ and F(b, z) = u can be solved for z.
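A small check of (7.78) (ours): any strictly monotonic, continuously differentiable generator f plugged into $F(x,y) = f[f^{-1}(x) + f^{-1}(y)]$ yields an associative operation. Two sample generators are tried below; the values and names are ours, and arguments are kept positive so the inverses are well defined.

import math

f, finv = math.exp, math.log                  # strictly monotonic, C^1
F = lambda u, v: f(finv(u) + finv(v))         # here F(u, v) = u * v

g = lambda u: u ** 3                          # another admissible generator
ginv = lambda u: u ** (1.0 / 3.0)             # arguments kept positive below
F2 = lambda u, v: g(ginv(u) + ginv(v))        # F2(u, v) = (u^(1/3) + v^(1/3))^3

x, y, z = 1.7, 2.3, 0.9
print(F(F(x, y), z), F(x, F(y, z)))           # equal
print(F2(F2(x, y), z), F2(x, F2(y, z)))       # equal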

Theorem 7.10

(Generalized auto-distributivity equation).

If the domain of (7.79) is such that F, G, H, M and N have continuous partial derivatives for z ≠ 0; if H 1(x, y) ≠ 0, H 2(x, y) ≠ 0, F 1(x, c) ≠ 0, M 1(x, c) ≠ 0 and N 1(x, c) ≠ 0, if M(x, a) and N(x, a) are constant and if M(x, c) = u and N(y, c) = v have unique solutions (c ≠ 0), then the general solution continuous, on a real rectangle, of the functional equation

(7.79) \[ F[G(x,y), z] = H[M(x,z), N(y,z)] , \]

where we assume $G \neq M$, $G \neq N$, $H \neq M$ and $H \neq N$, is

(7.80) \[ F(x,y) = l\bigl[ f(y)\,g^{-1}(x) + \alpha(y) + \beta(y) \bigr], \qquad G(x,y) = g\bigl[ h(x) + k(y) \bigr], \qquad H(x,y) = l\bigl[ m(x) + n(y) \bigr], \]
\[ M(x,y) = m^{-1}\bigl[ f(y)\,h(x) + \alpha(y) \bigr], \qquad N(x,y) = n^{-1}\bigl[ f(y)\,k(x) + \beta(y) \bigr] , \]

where g, h, k, l, m and n are arbitrary strictly monotonic and continuously differentiable functions, f(a) = 0 and f, α and β are arbitrary continuously differentiable functions.

The two sides of (7.79) can be written as

\[ l\bigl\{ f(z)\,[h(x) + k(y)] + \alpha(z) + \beta(z) \bigr\} . \]


URL:

https://www.sciencedirect.com/science/article/pii/S0076539205800104

Probability Theory

P.K. Bhattacharya , Prabir Burman , in Theory and Methods of Statistics, 2016

An Extension

Let $S_1, \ldots, S_l$ be disjoint open sets in $\mathbb{R}^k$ with $\sum_{j=1}^{l} P[X \in S_j] = 1$. Let $g : \bigcup_{j=1}^{l} S_j \to \mathbb{R}^k$, $Y = g(X)$, where for each j, the restriction $g_j$ of g on $S_j$ is one to one with continuous partial derivatives and nonvanishing Jacobian $J_{g_j}$. Then for all events B,

\[ \int_B f_Y(y)\,dy = P[Y \in B] = \sum_{j=1}^{l} P[Y \in B,\, X \in S_j] = \sum_{j=1}^{l} P\bigl[ X \in g_j^{-1}(B),\, X \in S_j \bigr] = \sum_{j=1}^{l} \int_{g^{-1}(B) \cap S_j} f_X(x)\,dx \]
\[ = \sum_{j=1}^{l} \int_{B \cap g_j(S_j)} f_X\bigl( g_j^{-1}(y) \bigr)\, \bigl| J_{g_j^{-1}}(y) \bigr|\, dy = \int_B \sum_{j=1}^{l} f_X\bigl( g_j^{-1}(y) \bigr)\, \bigl| J_{g_j^{-1}}(y) \bigr|\, I_{g(S_j)}(y)\, dy , \]

where $I_A(y) = 1$ if $y \in A$ and $I_A(y) = 0$ if $y \notin A$. Hence

\[ f_Y(y) = \sum_{j=1}^{l} f_X\bigl( g_j^{-1}(y) \bigr)\, \bigl| J_{g_j^{-1}}(y) \bigr|\, I_{g(S_j)}(y) . \]
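A one-dimensional illustration of this extension (ours): for X standard normal and $Y = g(X) = X^2$, g is one to one on $S_1 = (-\infty, 0)$ and $S_2 = (0, \infty)$, each branch inverse has $|J| = 1/(2\sqrt{y})$, and the branch-sum formula reproduces the density of Y, here compared against a Monte Carlo estimate. The code and names are ours.

import numpy as np

rng = np.random.default_rng(0)
xs = rng.standard_normal(2_000_000)
ys = xs ** 2            # g(x) = x**2 is one to one on S1 = (-inf, 0) and S2 = (0, inf)

y = 1.3
phi = lambda u: np.exp(-u ** 2 / 2.0) / np.sqrt(2.0 * np.pi)     # density of X
# branch sum: each inverse contributes f_X(g_j^{-1}(y)) * 1 / (2 sqrt(y))
f_Y = (phi(np.sqrt(y)) + phi(-np.sqrt(y))) / (2.0 * np.sqrt(y))
eps = 0.01
mc = np.mean(np.abs(ys - y) < eps) / (2.0 * eps)                  # Monte Carlo estimate
print(f_Y, mc)                                                    # both about 0.18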


URL:

https://www.sciencedirect.com/science/article/pii/B9780128024409000011

Classical Risk Model with Investments in a Risk-Free Asset

Yuliya Mishura , Olena Ragulina , in Ruin Probabilities, 2016

2.3.2 Examples

Example 2.1

Let $Y_i$, $i \ge 1$, be uniformly distributed on [0, 1]. It is evident that the p.d.f. of $Y_i$ is not continuous at the points y = 0 and y = 1. Hence, it is not differentiable at these points. Nevertheless, we now show that φ(x, t) has partial derivatives w.r.t. x and t on $\mathbb{R}_+^2$, which are continuous as functions of two variables.

We can rewrite [2.22] as

[2.30] \[ \varphi(x,t) = \lambda e^{-\lambda t} \int_0^t e^{\lambda v} \int_{(x + c/r)e^{r(t-v)} - c/r - 1}^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + e^{-\lambda t} . \]

Consider now three cases.

1) If $(x, t) \in [0, 1] \times \left[ 0, \frac{1}{r} \ln \frac{r + c}{r x + c} \right]$, then for all $v \in [0, t]$, we have

\[ (x + c/r)\,e^{r(t-v)} - c/r - 1 \;\le\; (x + c/r)\,e^{r t} - c/r - 1 \;\le\; 0 . \]

Hence, we can rewrite [2.30] as

[2.31] \[ \varphi(x,t) = \lambda e^{-\lambda t} \int_0^t e^{\lambda v} \int_0^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + e^{-\lambda t} . \]

2) If $(x, t) \in [0, 1] \times \left( \frac{1}{r} \ln \frac{r + c}{r x + c}, +\infty \right)$, then

\[ (x + c/r)\,e^{r(t-v)} - c/r - 1 \ge 0 \;\;\text{for } v \in \left[ 0,\; t - \tfrac{1}{r} \ln \tfrac{r + c}{r x + c} \right], \qquad (x + c/r)\,e^{r(t-v)} - c/r - 1 \le 0 \;\;\text{for } v \in \left[ t - \tfrac{1}{r} \ln \tfrac{r + c}{r x + c},\; t \right] . \]

Hence, we can rewrite [2.30] as

[2.32] \[ \varphi(x,t) = \lambda e^{-\lambda t} \left( \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{\lambda v} \int_{(x + c/r)e^{r(t-v)} - c/r - 1}^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + \int_{t - \frac{1}{r}\ln\frac{r+c}{rx+c}}^{\,t} e^{\lambda v} \int_0^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv \right) + e^{-\lambda t} . \]

3) If (x, t) ∈ (1, +∞)   ×   [0, +∞), then for all v ∈ [0, t], we have

\[ (x + c/r)\,e^{r(t-v)} - c/r - 1 \;\ge\; x - 1 \;>\; 0 . \]

Hence, we can rewrite [2.30] as

[2.33] \[ \varphi(x,t) = \lambda e^{-\lambda t} \int_0^t e^{\lambda v} \int_{(x + c/r)e^{r(t-v)} - c/r - 1}^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + e^{-\lambda t} . \]

Thus, we can divide the domain of φ(x, t) into three sets

\[ S_1 = [0, 1] \times \left[ 0, \tfrac{1}{r}\ln\tfrac{r+c}{rx+c} \right], \qquad S_2 = [0, 1] \times \left( \tfrac{1}{r}\ln\tfrac{r+c}{rx+c}, +\infty \right), \qquad S_3 = (1, +\infty) \times [0, +\infty) , \]

where it satisfies equations [2.31], [2.32] and [2.33], respectively.
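The curve separating $S_1$ from $S_2$ is $t = \frac{1}{r}\ln\frac{r+c}{rx+c}$, on which the lower integration limit in [2.30] vanishes at v = 0. A tiny numerical check (ours, with illustrative parameter values of our choosing) confirms this.

import math

r, c = 0.05, 2.0                                          # illustrative parameter values
t_star = lambda x: (1.0 / r) * math.log((r + c) / (r * x + c))
for x in (0.0, 0.3, 0.7, 1.0):
    t = t_star(x)
    lower_limit_at_v0 = (x + c / r) * math.exp(r * t) - c / r - 1.0
    print(x, t, lower_limit_at_v0)                        # last column is 0 up to rounding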

Applying arguments similar to those in the proof of theorem 2.2 and using [2.31], [2.32] and [2.33], we can show that φ(x, t) is continuous as a function of two variables on S 1, S 2, S 3. Moreover, we can show that φ(x, t) is continuous as a function of two variables on ℝ+ 2. Then, it satisfies equations[2.31], [2.32] and [2.33] on

\[ \bar S_1 = [0, 1] \times \left[ 0, \tfrac{1}{r}\ln\tfrac{r+c}{rx+c} \right], \qquad \bar S_2 = [0, 1] \times \left[ \tfrac{1}{r}\ln\tfrac{r+c}{rx+c}, +\infty \right), \qquad \bar S_3 = [1, +\infty) \times [0, +\infty) , \]

respectively.

Applying arguments similar to those in the proof of theorem 2.2 and using [2.31], [2.32] and [2.33], we can also show that there are continuous partial derivatives of φ(x, t) w.r.t. x and t on $\bar S_1$, $\bar S_2$, $\bar S_3$. Note that one-sided derivatives are meant on the boundaries of these sets.

If $(x, t) \in \bar S_1$, then

[2.34] \[ \frac{\partial \varphi(x,t)}{\partial x} = \lambda e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\; v \bigr)\, dv , \]

[2.35] \[ \frac{\partial \varphi(x,t)}{\partial t} = -\lambda^2 e^{-\lambda t} \int_0^t e^{\lambda v} \int_0^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + \lambda \int_0^x \varphi(u,t)\,du + \lambda (r x + c)\, e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\; v \bigr)\, dv - \lambda e^{-\lambda t} . \]

If $(x, t) \in \bar S_2$, then

[2.36] \[ \frac{\partial \varphi(x,t)}{\partial x} = \lambda e^{(r - \lambda)t} \left( \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{(\lambda - r)v} \Bigl( \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr) - \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr) \Bigr)\, dv + \int_{t - \frac{1}{r}\ln\frac{r+c}{rx+c}}^{\,t} e^{(\lambda - r)v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr)\, dv \right) , \]

[2.37] \[ \frac{\partial \varphi(x,t)}{\partial t} = -\lambda^2 e^{-\lambda t} \left( \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{\lambda v} \int_{(x + c/r)e^{r(t-v)} - c/r - 1}^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + \int_{t - \frac{1}{r}\ln\frac{r+c}{rx+c}}^{\,t} e^{\lambda v} \int_0^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv \right) + \lambda \int_0^x \varphi(u,t)\,du + \lambda (r x + c)\, e^{(r - \lambda)t} \left( \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{(\lambda - r)v} \Bigl( \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr) - \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr) \Bigr)\, dv + \int_{t - \frac{1}{r}\ln\frac{r+c}{rx+c}}^{\,t} e^{(\lambda - r)v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr)\, dv \right) - \lambda e^{-\lambda t} . \]

If $(x, t) \in \bar S_3$, then

[2.38] \[ \frac{\partial \varphi(x,t)}{\partial x} = \lambda e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v} \Bigl( \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr) - \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr) \Bigr)\, dv , \]

[2.39] \[ \frac{\partial \varphi(x,t)}{\partial t} = -\lambda^2 e^{-\lambda t} \int_0^t e^{\lambda v} \int_{(x + c/r)e^{r(t-v)} - c/r - 1}^{(x + c/r)e^{r(t-v)} - c/r} \varphi(u,v)\,du\,dv + \lambda \int_{x-1}^{x} \varphi(u,t)\,du + \lambda (r x + c)\, e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v} \Bigl( \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r,\, v \bigr) - \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr) \Bigr)\, dv - \lambda e^{-\lambda t} . \]

By [2.34]–[2.37], we conclude that there are continuous partial derivatives of φ(x, t) on the boundary of $\bar S_1$ and $\bar S_2$, i.e. for $x \in [0, 1]$ and $t = \frac{1}{r}\ln\frac{r+c}{rx+c}$. Furthermore, by [2.36]–[2.39], we see that there are continuous partial derivatives of φ(x, t) on the boundary of $\bar S_2$ and $\bar S_3$, i.e. for x = 1 and $t \in \mathbb{R}_+$. Thus, φ(x, t) has continuous partial derivatives on $\mathbb{R}_+^2$ and we can show that it satisfies [2.18].

Example 2.2

Let $\mathbb{P}[Y_i = 1] = 1$, $i \ge 1$. We now show that φ(x, t) does not have partial derivatives on some sets on $\mathbb{R}_+^2$. Consider the following three cases:

1) If $(x, t) \in [0, 1] \times \left[ 0, \frac{1}{r}\ln\frac{r+c}{rx+c} \right]$, then by [2.21], we have

[2.40] \[ \varphi(x,t) = e^{-\lambda t} . \]

2) If $(x, t) \in [0, 1] \times \left( \frac{1}{r}\ln\frac{r+c}{rx+c}, +\infty \right)$, then

\[ (x + c/r)\,e^{r(t-v)} - c/r > 1 \qquad \text{for } v \in \left[ 0,\; t - \tfrac{1}{r}\ln\tfrac{r+c}{rx+c} \right) . \]

Hence, we can rewrite [2.21] as

[2.41] \[ \varphi(x,t) = \lambda e^{-\lambda t} \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{\lambda v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv + e^{-\lambda t} . \]

3) If (x, t) ∈ (1, +∞)   ×   [0, +∞), then for all v ∈ [0, t], we have

\[ (x + c/r)\,e^{r(t-v)} - c/r - 1 \;\ge\; x - 1 \;>\; 0 . \]

Hence, we can rewrite [2.21] as

[2.42] \[ \varphi(x,t) = \lambda e^{-\lambda t} \int_0^t e^{\lambda v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv + e^{-\lambda t} . \]

Thus, as in example 2.1, we can divide the domain of φ(x, t) into the same three sets S 1, S 2, S 3, where it is defined by [2.40], [2.41] and [2.42], respectively. The sets S ¯ 1 , S ¯ 2 and S ¯ 3 are also defined as in example 2.1.

If $(x, t) \in \bar S_1$, then it can easily be seen from [2.40] that φ(x, t) is continuous as a function of two variables and has continuous partial derivatives on $\bar S_1$. Moreover,

[2.43] \[ \frac{\partial \varphi(x,t)}{\partial x} = 0 \qquad\text{and}\qquad \frac{\partial \varphi(x,t)}{\partial t} = -\lambda e^{-\lambda t} . \]

On the contrary, [2.41] and [2.42] do not imply these properties of φ(x, t) on S ¯ 2 and S ¯ 3 . Nevertheless, we make the additional assumption, which is intuitively natural, that φ(x, t) is continuous as a function of two variables on ℝ+ 2 and has continuous partial derivatives on S ¯ 2 and S ¯ 3 . Let us denote by φ 1′(·, ·) the partial derivative of φ(x, t) w.r.t. the first argument.

If $(x, t) \in \bar S_2$, then by [2.41], we get

[2.44] \[ \frac{\partial \varphi(x,t)}{\partial x} = \frac{\lambda}{r x + c} \left( \frac{r x + c}{r + c} \right)^{\lambda / r} \varphi\!\left( 0,\; t - \tfrac{1}{r}\ln\tfrac{r+c}{rx+c} \right) + \lambda e^{(r - \lambda)t} \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{(\lambda - r)v}\, \varphi'_1\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv , \]

[2.45] \[ \frac{\partial \varphi(x,t)}{\partial t} = -\lambda^2 e^{-\lambda t} \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{\lambda v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv + \lambda \left( \frac{r x + c}{r + c} \right)^{\lambda / r} \varphi\!\left( 0,\; t - \tfrac{1}{r}\ln\tfrac{r+c}{rx+c} \right) + \lambda (r x + c)\, e^{(r - \lambda)t} \int_0^{\,t - \frac{1}{r}\ln\frac{r+c}{rx+c}} e^{(\lambda - r)v}\, \varphi'_1\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv - \lambda e^{-\lambda t} . \]

If $(x, t) \in \bar S_3$, then by [2.42], we get

[2.46] \[ \frac{\partial \varphi(x,t)}{\partial x} = \lambda e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi'_1\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv , \]

\[ \frac{\partial \varphi(x,t)}{\partial t} = -\lambda^2 e^{-\lambda t} \int_0^t e^{\lambda v}\, \varphi\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv + \lambda\, \varphi(x - 1, t) + \lambda (r x + c)\, e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi'_1\bigl( (x + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv - \lambda e^{-\lambda t} . \]

However, the partial derivatives of φ(x, t) w.r.t. x and t do not exist on the boundary of S ¯ 1 and S ¯ 2 even under these additional assumptions. Moreover, the partial derivative of φ(x, t) w.r.t. x does not exist on the boundary of S ¯ 2 and S ¯ 3 .

Indeed, if $x \in [0, 1]$ and $t = \frac{1}{r}\ln\frac{r+c}{rx+c}$, then by [2.43], we have

\[ \frac{\partial \varphi(x,t)}{\partial x} = 0 \qquad\text{and}\qquad \frac{\partial \varphi(x,t)}{\partial t} = -\lambda e^{-\lambda t} \]

on S ¯ 1 and by [2.44] and [2.45], we have

\[ \frac{\partial \varphi(x,t)}{\partial x} = \frac{\lambda}{r x + c} \left( \frac{r x + c}{r + c} \right)^{\lambda / r} \]

and

\[ \frac{\partial \varphi(x,t)}{\partial t} = \lambda \left( \frac{r x + c}{r + c} \right)^{\lambda / r} - \lambda e^{-\lambda t} \]

on S ¯ 2 .

Furthermore, if x  =   1 and t ∈ ℝ+, then by [2.44], we have

\[ \frac{\partial \varphi(x,t)}{\partial x} = \frac{\lambda}{r + c}\, \varphi(0, t) + \lambda e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi'_1\bigl( (1 + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv \]

on S ¯ 2 and by [2.46], we have

\[ \frac{\partial \varphi(x,t)}{\partial x} = \lambda e^{(r - \lambda)t} \int_0^t e^{(\lambda - r)v}\, \varphi'_1\bigl( (1 + c/r)e^{r(t-v)} - c/r - 1,\, v \bigr)\, dv \]

on S ¯ 3 .

It only remains to note that φ(0, t)   >   0 for all t ∈ ℝ+ by [2.40] and [2.41].


URL:

https://www.sciencedirect.com/science/article/pii/B9781785482182500028

Interpolation of Operators

In Pure and Applied Mathematics, 1988

Definition 4.8

A multi-index is an n-tuple v = (v 1, v 2,…, vn ) of nonnegative integers. Its length |v| is the quantity

\[ |v| = \sum_{j=1}^{n} v_j . \]

The differential operator Dv is defined by

\[ D^v f = D_1^{v_1} D_2^{v_2} \cdots D_n^{v_n} f , \]

where $D_j = \partial/\partial x_j$. The order of the differential operator $D^v$ is the length |v| of the multi-index v. If $h = (h_1, h_2, \ldots, h_n)$ belongs to $\mathbb{R}^n$, then the power $h^v$ is defined by

\[ h^v = h_1^{v_1} h_2^{v_2} \cdots h_n^{v_n} . \]

For any multi-index v, we have

(4.15) \[ |h^v| \le |h|^{|v|} . \]

If f is a function on R n with continuous partial derivatives of order r, then

(4.16) \[ \Delta_h^r f(x) = \int M_r(\xi) \sum_{|v| = r} \frac{r!}{v!}\, D^v f(x + \xi h)\, h^v\, d\xi . \]

To see this, let g(t) = f(x + th/|h|) and observe that

\[ \Delta_{|h|}^r g(t) = \Delta_h^r f\!\left( x + t\,\frac{h}{|h|} \right) . \]

The function $g^{(r)}$ is the r-th directional derivative of f in the direction h, and

\[ g^{(r)}(\xi |h|) = \sum_{|v| = r} \frac{r!}{v!}\, D^v f(x + \xi h) \left( \frac{h}{|h|} \right)^{\!v} . \]

Applying (4.13) to the function g(t) (at t = 0), we obtain

\[ \Delta_h^r f(x) = \Delta_{|h|}^r g(0) = \int g^{(r)}(\xi |h|)\, |h|^r\, M_r(\xi)\, d\xi , \]

which, together with the preceding estimate, establishes (4.16).
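The multinomial identity behind this argument — the r-th derivative of $f(x + th)$ with respect to t equals $\sum_{|v|=r} \frac{r!}{v!} D^v f(x + th)\, h^v$ — can be verified symbolically. The sketch below (ours, with sympy, a sample function on $\mathbb{R}^2$ of our choosing, and r = 3) checks it at t = 0.

import sympy as sp
from itertools import product

x1, x2, h1, h2, t = sp.symbols('x1 x2 h1 h2 t')
f = sp.exp(x1) * sp.sin(x2) + x1**2 * x2          # sample smooth function on R^2
r = 3

g = f.subs({x1: x1 + t * h1, x2: x2 + t * h2})    # g(t) = f(x + t h)
lhs = sp.diff(g, t, r).subs(t, 0)                 # r-th derivative at t = 0

rhs = 0
for v1, v2 in product(range(r + 1), repeat=2):    # multi-indices v with |v| = r
    if v1 + v2 == r:
        coeff = sp.factorial(r) / (sp.factorial(v1) * sp.factorial(v2))
        d = f
        if v1: d = sp.diff(d, x1, v1)
        if v2: d = sp.diff(d, x2, v2)
        rhs += coeff * d * h1**v1 * h2**v2

print(sp.simplify(lhs - rhs))                     # prints 0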


URL:

https://www.sciencedirect.com/science/article/pii/S0079816908608508

Vector Calculus

James Kirkwood , in Mathematical Physics with Partial Differential Equations (Second Edition), 2018

Surfaces

The next two integrals we discuss involve integrating over a surface rather than a curve. Like line integrals that extended integrating a function over an interval to integrating over a curve, surface integrals extend the idea of integrating over a planar region to integrating over a surface.

In developing line integrals it was fundamental that we develop an approximation for an increment of the path—a quantity we denoted Δs. Our first task with creating surface integrals is to develop an approximation for an increment of the surface. We denote this incremental element ΔS.

The simplest situation—the one that we now consider—is when the surface can be expressed z  = f(x, y). The cases where y  = g(x, z) and x  = h(y, z) are conceptually identical.

Suppose that D is a region in the x,y plane and f(x, y ) has continuous partial derivatives. Divide D into small rectangles whose dimensions are Δx and Δy. We consider the particular rectangle whose corners are (x 0, y 0), (x 0  +   Δx, y 0), (x 0  +   Δx, y 0  +   Δy) and (x 0, y 0  +   Δy). See Fig. 2.1.4.

Figure 2.1.4.

Denote this rectangle ΔA. If we project ΔA onto the surface z  = f(x, y), we get a portion of the surface that we denote ΔS. To estimate the area of ΔS, choose a point p on ΔS and construct the plane tangent to the surface at that point. It is notationally convenient to choose p  =   (x 0, y 0, f(x 0, y 0)). We project ΔA onto this plane and get a planar region we denote ΔP. We compute the area of ΔP, and this will be our estimate for the area of ΔS. The sides of ΔP are the vectors

\[ \hat{\mathbf{u}} = \Delta x\, \hat{\mathbf{i}} + \frac{\partial f(x_0, y_0)}{\partial x}\, \Delta x\, \hat{\mathbf{k}} , \]

\[ \hat{\mathbf{v}} = \Delta y\, \hat{\mathbf{j}} + \frac{\partial f(x_0, y_0)}{\partial y}\, \Delta y\, \hat{\mathbf{k}} . \]

The area of $\Delta P = \| \hat{\mathbf{u}} \times \hat{\mathbf{v}} \|$. Now

\[ \hat{\mathbf{u}} \times \hat{\mathbf{v}} = \begin{vmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ \Delta x & 0 & \frac{\partial f(x_0, y_0)}{\partial x}\,\Delta x \\ 0 & \Delta y & \frac{\partial f(x_0, y_0)}{\partial y}\,\Delta y \end{vmatrix} = \Delta x\, \Delta y \left[ -\frac{\partial f(x_0, y_0)}{\partial x}\, \hat{\mathbf{i}} - \frac{\partial f(x_0, y_0)}{\partial y}\, \hat{\mathbf{j}} + \hat{\mathbf{k}} \right] , \]

so

\[ \Delta P = \| \hat{\mathbf{u}} \times \hat{\mathbf{v}} \| = \Delta x\, \Delta y\, \sqrt{ \left( \frac{\partial f(x_0, y_0)}{\partial x} \right)^{2} + \left( \frac{\partial f(x_0, y_0)}{\partial y} \right)^{2} + 1 } . \]

When we develop integrals over surfaces, this will be our incremental surface element if we can write the surface as z  = f(x, y).
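The identity $\Delta P = \|\hat{\mathbf{u}} \times \hat{\mathbf{v}}\| = \Delta x\,\Delta y\,\sqrt{f_x^2 + f_y^2 + 1}$ derived above is easy to confirm numerically for any sample surface; the sketch below (ours, with a surface and point of our choosing) does so at a single increment.

import numpy as np

f = lambda x, y: x**2 + y**2                 # sample surface z = f(x, y) (our choice)
x0, y0, dx, dy = 0.3, 0.7, 1e-3, 1e-3
fx, fy = 2 * x0, 2 * y0                      # partial derivatives at (x0, y0)

u = np.array([dx, 0.0, fx * dx])             # sides of the tangent-plane parallelogram
v = np.array([0.0, dy, fy * dy])
area_cross = np.linalg.norm(np.cross(u, v))
area_formula = dx * dy * np.sqrt(fx**2 + fy**2 + 1.0)
print(area_cross, area_formula)              # identical, as derived above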


URL:

https://www.sciencedirect.com/science/article/pii/B9780128147597000028