asymptotically stable if the zero solution is stable and if there exists δ₀ > 0 so that ‖φ‖ < δ₀ implies x_t(φ) → 0 as t → ∞. We say that the zero solution is unstable if it is not stable. The above stability properties are closely related to those of the so-called linearization of (5.3.1) at the zero solution, which is given by the linear system of delay differential equations
$$\dot{y}_i(t) = -y_i(t) + \beta \sum_{j=1}^{n} w_{ij}\, y_j(t-\tau), \qquad (5.3.2)$$
with
$$\beta = f'(0) \qquad (5.3.3)$$
denoting the neural gain. Let W = [w_{ij}] be the connection matrix and I the n × n identity matrix. It is known (see, for example, Hale [1977]) that the stability of the zero solution of (5.3.2) is intimately related to the distribution of the zeros (called characteristic values) of the following characteristic equation
$$\det[\lambda I + I - \beta e^{-\lambda\tau} W] = 0, \qquad (5.3.4)$$
obtained by substituting y(t) = e^{λt}c into (5.3.2) for a nonzero vector c ∈ ℝⁿ. The following result holds (see Hale [1977]).
Lemma 5.3.1. (i) The zero solution of (5.3.2) is asymptotically stable if and only if all zeros of (5.3.4) have negative real parts; (ii) if the zero solution of (5.3.2) is asymptotically stable, then so is the zero solution of (5.3.1); (iii) if there exists a zero of (5.3.4) with positive real part, then the zero solution of (5.3.2) and the zero solution of (5.3.1) are unstable.

It is therefore crucial to describe the distribution of all characteristic values. The following result, due to Bélair [1993], shows that these values are closely associated with the eigenvalues of the matrix W.

Lemma 5.3.2. λ is a characteristic value if and only if there exists an eigenvalue d of the matrix W so that
$$d\beta = (1+\lambda)e^{\lambda\tau}. \qquad (5.3.5)$$

Proof. This is trivial if we rewrite (5.3.4) as
$$\det[(\lambda+1)e^{\lambda\tau} I - \beta W] = 0. \qquad (5.3.6)$$
In view of the above result, it is important to understand the zeros of the following transcendental equation
$$(z + a)e^{z} + \gamma = 0 \qquad (5.3.7)$$
for some constants a and γ. In the case where both a and γ are real constants, the region of stability in the plane of the parameters (a, γ) was completely described in the following result (see Hale [1977]).

Lemma 5.3.3. All roots of equation (5.3.7) with real constants a and γ have negative real parts if and only if
$$a > -1, \qquad a + \gamma > 0, \qquad \rho \sin\rho - a\cos\rho > \gamma, \qquad (5.3.8)$$
where ρ = π/2 if a = 0, or ρ is the unique root of ρ = −a tan ρ in (0, π) if a ≠ 0. At least one of the zeros of (5.3.7) has positive real part if the > sign in one of the three inequalities (5.3.8) is replaced by <.

To relate (5.3.5) to (5.3.7), we let z = λτ to get dβ = (1 + z/τ)e^{z}, or τdβ = (τ + z)e^{z}. In other words, (5.3.5) can be written as (5.3.7) with a = τ and γ = −τdβ after the change of variable z = λτ. It is easy to verify that the set of conditions
(5.3.8) in Lemma 5.3.3 is equivalent to dβ < 1 and 1/cos ρ < dβ, with ρ = −τ tan ρ and ρ ∈ (π/2, π), which is equivalent to either
dβ ∈ [−1, 1)
or
dβ < −1 and τ < arccos(1/(dβ)) / √((dβ)² − 1).
Consequently, if we let σ(W) denote the set of all eigenvalues of W, then we have the following.

Theorem 5.3.4. Assume that σ(W) ⊂ ℝ and τ > 0. Then the zero solution of system (5.3.2) is asymptotically stable if and only if, for every d ∈ σ(W), either
(i) βd ∈ [−1, 1), or
(ii) βd < −1 and τ < arccos(1/(βd)) / √(β²d² − 1),
where the inverse cosine takes its value in the interval [π/2, π].
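The criterion in Theorem 5.3.4 is easy to check numerically for a concrete network. The sketch below, written under the assumption that W has real spectrum, evaluates conditions (i) and (ii) for every eigenvalue; the matrix, gain and delay values are illustrative only and do not come from the text.

```python
import numpy as np

def linearization_stable(W, beta, tau):
    """Check the condition of Theorem 5.3.4 for each (assumed real) eigenvalue of W."""
    for d in np.linalg.eigvals(W):
        bd = beta * d.real
        if -1.0 <= bd < 1.0:
            # condition (i): stable for every nonnegative delay
            continue
        if bd < -1.0:
            # condition (ii): stable only while tau stays below this bound
            bound = np.arccos(1.0 / bd) / np.sqrt(bd * bd - 1.0)
            if tau < bound:
                continue
        return False
    return True

W = np.array([[-0.3, 0.6],
              [ 0.6, -0.3]])      # symmetric, eigenvalues 0.3 and -0.9
print(linearization_stable(W, beta=2.0, tau=0.5))   # expected: True
print(linearization_stable(W, beta=2.0, tau=3.0))   # expected: False
```

For this sample matrix, delay-induced instability is indeed possible in the sense of Corollary 5.3.6 below, since −λ_min = 0.9 > 1/β = 0.5 > λ_max = 0.3.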
The limiting cases of arbitrarily small and arbitrarily large values of the delay τ are considered in the following result.

Corollary 5.3.5. Assume that σ(W) ⊂ ℝ. Then (i) the zero solution of (5.3.2) is asymptotically stable for τ = 0 if and only if d < 1/β for each d ∈ σ(W); (ii) the zero solution of (5.3.2) is asymptotically stable for all nonnegative values of the delay τ if and only if d ∈ [−1/β, 1/β) for each d ∈ σ(W).

In applications, it is important to assess the destabilizing influence of the presence of the delays in the governing equations. We say that delay-induced instability is possible in (5.3.2) whenever the zero solution is asymptotically stable when τ = 0 and there exists a value of τ for which the zero solution is not asymptotically stable. In view of Corollary 5.3.5, we then have the following

Corollary 5.3.6. Assume that σ(W) ⊂ ℝ. Then a necessary and sufficient condition for delay-induced instability to be possible in (5.3.2) is that
$$-\lambda_{\min} > \frac{1}{\beta} > \lambda_{\max}, \qquad (5.3.9)$$
where λ_min and λ_max denote, respectively, the eigenvalues of the matrix W with minimum and maximum values.

It is more problematic to perform a stability analysis for the case of a general connection matrix W with complex eigenvalues. Nevertheless, we have the following
Lemma 5.3.7. Assume that the number b is complex in the equation
$$\lambda = -1 + b e^{-\lambda\tau}. \qquad (5.3.10)$$
Then (i) all zeros of (5.3.10) have negative real parts for τ = 0 if and only if Re(b) < 1; (ii) all zeros of (5.3.10) have negative real parts for all nonnegative values of the delay τ if and only if |b| ≤ 1 and b ≠ 1, where |·| denotes the norm of a complex number.

Proof. (i) can be easily proved. To prove (ii), write σ = λ + 1 in equation (5.3.10) so that
$$\sigma e^{\sigma\tau} = b e^{\tau}. \qquad (5.3.11)$$
If we further let σ = r + is = ρe^{iθ} and b = Re^{iφ},
(x^Φ(t), y^Φ(t))^T eventually coincides with (q(t), q(t))^T if Φ = (φ, ψ)^T ∈ X^{+,+} (respectively, Φ ∈ X^{−,−}) with φ(0) = ψ(0), where q : ℝ → ℝ is a periodic function of minimal period ω = ln((2/(1+σ))e^τ − 1) + ln((2/(1−σ))e^τ − 1). We show by direct calculations that the solution (x^Φ, y^Φ)^T is convergent to (−1, 1)^T as t → ∞ if (φ(0), ψ(0))^T belongs to a region R_I; that if (φ(0), ψ(0))^T belongs to the region R_II, which consists of the points (φ(0), ψ(0))^T with either φ(0) ≥ σ, ψ(0) ≥ σ and ψ(0) > φ(0)e^τ + e^τ − 1, or φ(0) < σ, ψ(0) < σ and φ(0) < ψ(0)e^τ − e^τ + 1, then there exists t* > 0 so that (x^Φ(t*), y^Φ(t*))^T ∈ R_I and (x^Φ(t* + ·), y^Φ(t* + ·))^T|_{[−τ,0]} is an initial value of the type considered for R_I; and that the same holds for the region R_III, consisting of the points above the diagonal that belong to neither R_I nor R_II. In summary, a solution (x^Φ, y^Φ)^T of (5.5.3) starting from any point above the diagonal must enter the region R_I eventually, and therefore must be convergent to (−1, 1)^T as t → ∞; the region below the diagonal is treated symmetrically. The situation with |σ| > 1 is completely different from that with |σ| < 1. In particular, we notice the role of a large threshold in synchronization.

Theorem 5.5.3. Let |σ| > 1, and let Φ = (φ, ψ)^T ∈ X_σ. Then, as t → ∞, (x^Φ(t), y^Φ(t))^T → (1, 1)^T if σ > 1, and (x^Φ(t), y^Φ(t))^T → (−1, −1)^T if σ < −1.

In the case where |σ| = 1, the dynamics of the network undergoes a change when the ratio (ψ(0)+1)/(φ(0)+1) (if σ = 1) or (ψ(0)−1)/(φ(0)−1) (if σ = −1) passes certain critical values.
Clearly, if Φ = (φ, ψ)^T ∈ X is synchronous (i.e., φ = ψ), then the solution (x^Φ, y^Φ)^T : [−τ, ∞) → ℝ² is synchronized, that is, x^Φ(t) = y^Φ(t) for all t ≥ −τ, and both components satisfy the scalar equation
$$\dot{x}(t) = -x(t) + f(x(t-\tau)), \qquad (5.5.9)$$
and Theorem 5.5.1 shows that a solution of (5.5.9) with initial value in C⁺ is eventually periodic with the minimal period ω = ln((2/(1+σ))e^τ − 1) + ln((2/(1−σ))e^τ − 1) if |σ| < 1. It is interesting to note that the periodic function q and its minimal period ω are both independent of the choice of the initial value Φ ∈ X^{+,+} with φ(0) = ψ(0). Moreover, note that ω/(2τ) → 1 as τ → ∞.
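As a quick sanity check on the period formula just stated, the following snippet evaluates the first crossing time t₁ and the minimal period ω for sample values of σ, τ and φ(0) (illustrative values; the formulas used are the ones reconstructed above), and confirms that ω/(2τ) approaches 1 as τ grows.

```python
import math

def minimal_period(sigma, tau):
    # omega = ln(2e^tau/(1+sigma) - 1) + ln(2e^tau/(1-sigma) - 1), valid for |sigma| < 1
    return (math.log(2 * math.exp(tau) / (1 + sigma) - 1)
            + math.log(2 * math.exp(tau) / (1 - sigma) - 1))

sigma, phi0 = 0.2, 0.9
t1 = math.log(phi0 + 1) - math.log(1 + sigma)   # first crossing of the threshold sigma
print("t1 =", t1)
for tau in (1.0, 5.0, 20.0):
    omega = minimal_period(sigma, tau)
    print(tau, omega, omega / (2 * tau))        # last column tends to 1
```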
Theorem 5.5.2. Let |σ| < 1 and Φ = (φ, ψ)^T ∈ X_σ. Then (x^Φ(t), y^Φ(t))^T → (−1, 1)^T if ψ(0) > φ(0), and (x^Φ(t), y^Φ(t))^T → (1, −1)^T if ψ(0) < φ(0).

Theorem 5.5.4. Let σ = 1 and Φ = (φ, ψ)^T ∈ X_σ. Then
(i) (x^Φ(t), y^Φ(t))^T → (1, 1)^T if any one of the following cases occurs: (a) Φ ∈ X^{−,−}; (b) Φ ∈ X^{+,+} and e^{−τ} < (ψ(0)+1)/(φ(0)+1) < e^τ; (c) Φ ∈ X^{−,+} and ψ(0) = 1; (d) Φ ∈ X^{+,−} and φ(0) = 1;
(ii) (x^Φ(t), y^Φ(t))^T → (−1, 1)^T if either Φ ∈ X^{−,+} and ψ(0) ≠ 1, or Φ ∈ X^{+,+} and (ψ(0)+1)/(φ(0)+1) ≥ e^τ;
(iii) (x^Φ(t), y^Φ(t))^T → (1, −1)^T if either Φ ∈ X^{+,−} and φ(0) ≠ 1, or Φ ∈ X^{+,+} and (ψ(0)+1)/(φ(0)+1) ≤ e^{−τ}.

Theorem 5.5.5. Let σ = −1 and Φ = (φ, ψ)^T ∈ X_σ. Then
(i) (x^Φ(t), y^Φ(t))^T → (−1, −1)^T if either Φ ∈ X^{+,+} and [φ(0)+1][ψ(0)+1] ≠ 0, or Φ ∈ X^{−,−} and e^{−τ} < (ψ(0)−1)/(φ(0)−1) < e^τ;
(ii) (x^Φ(t), y^Φ(t))^T → (−1, 1)^T if any one of the following cases occurs: (a) Φ ∈ X^{−,+}; (b) Φ ∈ X^{−,−} and (ψ(0)−1)/(φ(0)−1) ≤ e^{−τ}; (c) Φ ∈ X^{+,+} and φ(0) = −1;
(iii) (x^Φ(t), y^Φ(t))^T → (1, −1)^T if any one of the following cases occurs: (a) Φ ∈ X^{+,−}; (b) Φ ∈ X^{−,−} and (ψ(0)−1)/(φ(0)−1) ≥ e^τ; (c) Φ ∈ X^{+,+} and ψ(0) = −1.

Regarding the threshold σ as a bifurcation parameter, the above results show that as σ moves from the interior of the interval (−1, 1) to its exterior, the dynamics of the model equation undergoes a transition from either ultimate periodicity and synchronization or convergence to the asynchronous equilibria (1, −1)^T and (−1, 1)^T, to synchronization and convergence to the synchronous equilibria (1, 1)^T and (−1, −1)^T. These results seem to indicate that a large threshold is the main source of synchronization in the network discussed.

We now provide the proofs of the above results. We first note that, by solving (5.5.3) on the intervals [0, τ], [τ, 2τ], ... successively for each Φ = (φ, ψ)^T ∈ X_σ, the unique solution (x^Φ, y^Φ)^T : [−τ, ∞) → ℝ² satisfies
(i) (x^Φ, y^Φ)^T : [−τ, ∞) → ℝ² is continuous;
(ii) x^Φ(t) and y^Φ(t) are differentiable for all t > 0 such that [x(t − τ) − σ][y(t − τ) − σ] ≠ 0;
(iii) (x^Φ(t), y^Φ(t))^T satisfies (5.5.3) for all t > 0 with [x(t − τ) − σ][y(t − τ) − σ] ≠ 0.
Note also that, by the variation-of-constants formula, on the interval [t*, t] the solution of the equation ż = −z − 1 is given by
$$z(t) = e^{-(t-t^*)}[z(t^*) + 1] - 1, \qquad \text{or} \qquad \frac{z(t)+1}{z(t^*)+1} = e^{-(t-t^*)},$$
and the solution of the equation ż = −z + 1 is given by
$$z(t) = e^{-(t-t^*)}[z(t^*) - 1] + 1, \qquad \text{or} \qquad \frac{z(t)-1}{z(t^*)-1} = e^{-(t-t^*)}.$$
Consequently, each (x^Φ, y^Φ)^T : [0, ∞) → ℝ² moves alternately along one of the following straight lines:
$$\frac{x(t)+1}{x(t^*)+1} = \frac{y(t)+1}{y(t^*)+1}, \qquad \frac{x(t)+1}{x(t^*)+1} = \frac{y(t)-1}{y(t^*)-1},$$
$$\frac{x(t)-1}{x(t^*)-1} = \frac{y(t)-1}{y(t^*)-1}, \qquad \frac{x(t)-1}{x(t^*)-1} = \frac{y(t)+1}{y(t^*)+1}.$$
In other words, a trajectory is a broken line. As an illustrative example, we consider Φ = (φ, ψ)^T ∈ X^{+,+} with |σ| < 1 and 1 < (ψ(0)+1)/(φ(0)+1) < e^τ. The solution (x^Φ(t), y^Φ(t))^T first moves along the straight line (x(t)+1)/(φ(0)+1) = (y(t)+1)/(ψ(0)+1) until t₁ + τ, where t₁ is the first zero of (x − σ)(y − σ) in [0, ∞); then it moves along the line (x(t)+1)/(x(t₁+τ)+1) = (y(t)−1)/(y(t₁+τ)−1) until t₂ + τ, where t₂ is the second zero of (x − σ)(y − σ) in [0, ∞); then it moves along the line (x(t)−1)/(x(t₂+τ)−1) = (y(t)−1)/(y(t₂+τ)−1) until t₃ + τ, where t₃ is the third zero of (x − σ)(y − σ) in [0, ∞); and then it moves towards (−1, 1)^T along the line (x(t)+1)/(x(t₃+τ)+1) = (y(t)−1)/(y(t₃+τ)−1).
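The broken-line picture above is easy to reproduce numerically. The sketch below is an illustration only: it assumes that (5.5.3) has the form ẋ = −x + f(y(t−τ)), ẏ = −y + f(x(t−τ)) with the McCulloch–Pitts signal function of (5.5.4) taken as f(u) = 1 for u < σ and f(u) = −1 for u ≥ σ; the parameter values and the simple Euler scheme are choices made for the sketch, not part of the text.

```python
import numpy as np

sigma, tau, dt, T = 0.2, 1.0, 1e-3, 30.0
f = lambda u: 1.0 if u < sigma else -1.0      # assumed McCulloch-Pitts nonlinearity

lag = int(round(tau / dt))
steps = int(round(T / dt))
x = np.empty(steps + lag + 1)
y = np.empty(steps + lag + 1)
x[:lag + 1] = 0.9                             # constant initial functions in C+ (>= sigma)
y[:lag + 1] = 0.5
for k in range(lag, steps + lag):
    x[k + 1] = x[k] + dt * (-x[k] + f(y[k - lag]))
    y[k + 1] = y[k] + dt * (-y[k] + f(x[k - lag]))

print(x[-1], y[-1])   # starting below the diagonal, the state should settle near (1, -1)
```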
Proof of Theorem 5.5.1. We only consider the case where Φ = (φ, ψ)^T ∈ X^{+,+}; the case where Φ = (φ, ψ)^T ∈ X^{−,−} can be dealt with analogously. Using equation (5.5.3), we can easily obtain that x^Φ(t) = y^Φ(t) for all t ≥ 0. Therefore, it suffices to show that the solution x of the equation
$$\dot{x} = -x + f(x(t-\tau)) \qquad (5.5.10)$$
with the initial condition x|_{[-τ,0]} = φ ∈ C⁺ is eventually periodic with the minimal period ln((2/(1+σ))e^τ − 1) + ln((2/(1−σ))e^τ − 1).

Let t₁ be the first nonnegative zero of x − σ. Then for t ∈ (0, t₁ + τ), except at most finitely many points, we have ẋ = −x − 1, from which and the continuity of the solution it follows that
$$x(t) = e^{-t}[\varphi(0) + 1] - 1, \qquad t \in [0, t_1 + \tau],$$
and, in particular,
$$x(t_1) = e^{-t_1}[\varphi(0) + 1] - 1 = \sigma.$$
This implies t₁ = ln[φ(0) + 1] − ln(1 + σ) and
$$x(t_1 + \tau) = e^{-(t_1+\tau)}(\varphi(0) + 1) - 1 = -1 + (1+\sigma)e^{-\tau} < \sigma.$$
Also
$$x_{t_1+\tau}(\theta) := x(t_1 + \tau + \theta) = e^{-(t_1+\tau+\theta)}(\varphi(0) + 1) - 1 = (1+\sigma)e^{-(\tau+\theta)} - 1 < \sigma$$
for θ ∈ (−τ, 0], which means x_{t₁+τ} ∈ C⁻. To move to the next step, where we explicitly construct a solution of (5.5.10) beyond [0, t₁ + τ], we start with a new initial value φ* = x_{t₁+τ}. Let t₂ be the first zero after t₁ of x − σ. Then t₂ > t₁ + τ and on (t₁ + τ, t₂ + τ) we have ẋ = −x + 1, and hence
$$x(t) = e^{-(t-t_1-\tau)}[x(t_1+\tau) - 1] + 1 = [\varphi(0) + 1]\Big(1 - \frac{2}{1+\sigma}e^{\tau}\Big)e^{-t} + 1$$
for t ∈ [t₁ + τ, t₂ + τ]. In particular,
$$x(t_2) = (\varphi(0) + 1)\Big(1 - \frac{2}{1+\sigma}e^{\tau}\Big)e^{-t_2} + 1 = \sigma$$
implies
$$t_2 = \ln(\varphi(0) + 1) + \ln\Big(\frac{2}{1+\sigma}e^{\tau} - 1\Big) - \ln(1-\sigma).$$
Also,
$$x_{t_2+\tau}(\theta) = x(t_2 + \tau + \theta) = (\varphi(0) + 1)\Big(1 - \frac{2}{1+\sigma}e^{\tau}\Big)e^{-(t_2+\tau+\theta)} + 1 = 1 - (1-\sigma)e^{-(\tau+\theta)} > \sigma$$
for θ ∈ (−τ, 0], which means x_{t₂+τ} ∈ C⁺. Repeating the above arguments inductively, we can show that there exists a sequence {tₙ} (n = 1, 2, ...) such that
$$t_1 = \ln(\varphi(0) + 1) - \ln(1+\sigma),$$
$$t_2 = \ln(\varphi(0) + 1) + \ln\Big(\frac{2}{1+\sigma}e^{\tau} - 1\Big) - \ln(1-\sigma),$$
$$t_{2n-1} = t_{2n-3} + \ln\Big(\frac{2}{1+\sigma}e^{\tau} - 1\Big) + \ln\Big(\frac{2}{1-\sigma}e^{\tau} - 1\Big), \qquad n \ge 2,$$
$$t_{2n} = t_{2n-2} + \ln\Big(\frac{2}{1+\sigma}e^{\tau} - 1\Big) + \ln\Big(\frac{2}{1-\sigma}e^{\tau} - 1\Big), \qquad n \ge 2,$$
with
$$x(t_n) = \sigma, \qquad x(t_{2n-1} + \tau) = -1 + (1+\sigma)e^{-\tau}, \qquad x(t_{2n} + \tau) = 1 - (1-\sigma)e^{-\tau},$$
and
$$x(t) = (\varphi(0) + 1)e^{-t} - 1, \qquad t \in [0, t_1 + \tau],$$
$$x(t) = e^{-(t-t_{2n-1}-\tau)}[x(t_{2n-1} + \tau) - 1] + 1, \qquad t \in [t_{2n-1} + \tau, t_{2n} + \tau],$$
$$x(t) = e^{-(t-t_{2n}-\tau)}[x(t_{2n} + \tau) + 1] - 1, \qquad t \in [t_{2n} + \tau, t_{2n+1} + \tau].$$
See Figure 5.5.2. Consequently, we can see that x(t) is periodic for t ≥ t₁ + τ, and that its minimal period is
$$\omega = t_{2n+1} - t_{2n-1} = \ln\Big(\frac{2}{1+\sigma}e^{\tau} - 1\Big) + \ln\Big(\frac{2}{1-\sigma}e^{\tau} - 1\Big).$$
Figure 5.5.2. The solution of (5.5.3) with |σ| < 1 and the initial value (φ, ψ)^T ∈ X^{+,+}.
This completes the proof.

Proof of Theorem 5.5.2. We only give the proof for the case of ψ(0) > φ(0).

Case 1. We first consider the case where φ ∈ C⁻ and ψ ∈ C⁺. Then (x(t), y(t))^T satisfies system (5.5.6) for t ∈ [0, τ] except at most finitely many t, so that
$$x(t) = (\varphi(0) + 1)e^{-t} - 1, \qquad y(t) = (\psi(0) - 1)e^{-t} + 1, \qquad t \in [0, \tau], \qquad (5.5.11)$$
which imply that x_τ(θ) = x(τ + θ) < σ and y_τ(θ) = y(τ + θ) > σ for θ ∈ (−τ, 0), and so x_τ ∈ C⁻ and y_τ ∈ C⁺. Repeating this argument on [τ, 2τ], [2τ, 3τ], ..., consecutively, we obtain that x_t ∈ C⁻ and y_t ∈ C⁺ for all t ≥ 0. Therefore, (x(t), y(t))^T satisfies (5.5.6) for almost all t ≥ 0, from which it follows that (5.5.11) holds for all t ∈ [0, ∞), and hence (x(t), y(t))^T → (−1, 1)^T as t → ∞.

Case 2. We now consider the case where φ, ψ ∈ C⁺ and ψ(0) > φ(0)e^τ + e^τ − 1. The idea is simple; namely, we show that if t₁ is the first zero of (x − σ)(y − σ) in [0, ∞) then x_{t₁+τ} ∈ C⁻ and y_{t₁+τ} ∈ C⁺, and then the conclusion follows from Case 1. By the initial condition and (5.5.4), (x(t), y(t))^T must satisfy system (5.5.5) for t ∈ [0, τ] except at most finitely many t. It follows that for t ∈ [0, τ]
$$x(t) = (\varphi(0) + 1)e^{-t} - 1, \qquad y(t) = (\psi(0) + 1)e^{-t} - 1. \qquad (5.5.12)$$
Let t₁ be the first zero of (x − σ)(y − σ) in [0, ∞). Then (x(t), y(t))^T satisfies system (5.5.5) for t ∈ [0, t₁ + τ] except at most finitely many t, and so expression (5.5.12) holds for t ∈ [0, t₁ + τ]. On the other hand,
$$(x(t)-\sigma)(y(t)-\sigma) = [(\varphi(0)+1)e^{-t} - 1 - \sigma]\,[(\psi(0)+1)e^{-t} - 1 - \sigma] = 0 \qquad (5.5.13)$$
implies
$$t = \ln(\varphi(0)+1) - \ln(1+\sigma) \qquad \text{or} \qquad t = \ln(\psi(0)+1) - \ln(1+\sigma).$$
In view of ψ(0) > φ(0)e^τ + e^τ − 1, we have
$$t_1 = \ln(\varphi(0)+1) - \ln(1+\sigma) \qquad \text{and} \qquad t_1 + \tau < \ln(\psi(0)+1) - \ln(1+\sigma).$$
This, together with (5.5.12), implies that
$$x(t_1+\tau) = -1 + (1+\sigma)e^{-\tau} < \sigma, \qquad y(t_1+\tau) = -1 + \frac{\psi(0)+1}{\varphi(0)+1}(1+\sigma)e^{-\tau} > \sigma,$$
$$x_{t_1+\tau}(\theta) = x(t_1+\tau+\theta) = (\varphi(0)+1)e^{-(t_1+\tau+\theta)} - 1 = -1 + (1+\sigma)e^{-(\tau+\theta)} < \sigma \qquad \text{for } \theta \in (-\tau, 0],$$
and
$$y_{t_1+\tau}(\theta) = y(t_1+\tau+\theta) = (\psi(0)+1)e^{-(t_1+\tau+\theta)} - 1 > -1 + (1+\sigma)e^{-\theta} > \sigma \qquad \text{for } \theta \in [-\tau, 0).$$
This shows that x_{t₁+τ} ∈ C⁻ and y_{t₁+τ} ∈ C⁺. So, by Case 1, we have (x(t), y(t))^T → (−1, 1)^T as t → ∞.
β φ(0) + eT - 1. Let
_
+ 1 ^(0) + 1 " T Then
1 < β < eT. Using a similar argument as that in Case 2, we can show that (jc(r), y(t))T satisfies (5.5.5) for t € [0, t\ + τ] except at most finitely many t, where t\ = ln[^(0) + 1] — ln(l + o),
and that (5.5.12) holds for all t e [0, t\ + r]. This, together with the fact that 1 < β < eT, implies that + τ) = - 1 + (1 + σ )e~T < σ, and
y(ti + t ) = - 1 + β (I + a)e~T < σ,
y(fl) = (f(0) + l)e~tl - 1 = - 1 + 0 ( 1 + o r ) > σ.
Therefore, there exists t2 e (ii, t\ + τ) such that y(t2) = σ. Clearly, t2 is the second zero of — a)(y — σ) in [0, oo). From (5.5.13) we can get t2 = ln(^(0) + 1) - ln(l + σ) = ij + ln/3. By(5.5.12), wehavex(i —τ) < σ andy(t — τ) > σίοτί e (ti+T,t2 + r). Itfollows from (5.5.3) and (5.5.4) that (x (t), y(t))T satisfies system (5.5.6) fori € (t\+x, t2+r). Therefore, for t e [t\ + τ, f2 + t], we have x(t) = e-^-tl~T)[x(ti y(t) = e-C-'i-^lyOi
+ r) + 1] - 1 = (1 + a)e~^-tl)
- 1, T
+ r) - 1] + 1 = [(1 + σ)β - 2e ]e~+
1. (5.5.14)
Hence, 1+ σ x(t2 + r) = -l + —— e~T ß
and
y(t2 + τ) = 1
2eT-(l ß
+σ)β -e
T
.
Note that t2 + τ < t\ + 2τ, and that for t e (t2 + τ, t\ + 2r) we have x(t - τ) = (φ(0) + l)e~ ( t ~ T ) - 1 < (
y(t - r) = (f(0) +
- 1 < (^(0) + l)e~'
< σ,
- 1 = σ.
So (x(t), y(t))T must satisfy system (5.5.7) for t e (t2 + r, t\ + 2r). It follows that for t e [t2 + τ, t\ + 2r] we have x(t) = e- ( t - t 2 ~ T ) [x(t 2 + τ ) - 1 ] + 1 = [ 1 + σ - 20e r ]- ( f - i l ) + 1, y(t) = e-C-'i-^iyit2
+ τ) - 1] + 1 = [(1 + σ)β - 2eT]e-^-^
+ 1. (5.5.15) Let t3 be the third zero of (* — σ)(γ — σ) in [0, oo), i.e., the first zero after t2 of (χ — σ)(γ — σ). Then (x{t), y(t))T satisfies system (5.5.7) for t e (t2 + r, /3 + r), and (5.5.15) holds for alii 6 [ί2 + τ,ί3 + τ], On the other hand, from (5.5.14), (5.5.15) and the fact β > 1, by solving the equation (x(t) — a)(y(t) — σ) = 0, we can get t3 = ti+ ln(2eT - (1 + σ)β) - ln(l - σ). This, together with (5.5.15), implies that x(t3 + τ) = 1 - l ßT f 2e - (1 + σ)β
- σ)e~T
and
y(t3 + r ) = 1 - (1 -
σ)β~τ.
If < β < er, then χ + τ) < σ, moreover, from (5.5.14) and (5.5.15), it is easy to see that χί3+τ(θ) = x(ts + r + θ) < σ and y f3+T (0) = y(t$ + τ + θ) > σ for θ € (—r, 0). This shows that ·χ ί3+τ e C~ and y, 3 + r g C+. So, by Case 1, we have (jc(f), y(t))T ( - 1 , l ) r as t oo. See Figure 5.5.3. y
Figure 5.5.3. The trajectory of (5.5.3) with |σ| < 1 and the initial value (φ, ψ)^T ∈ X^{+,+} with 1 < β < e^τ, where β denotes the ratio (ψ(0)+1)/(φ(0)+1); P_i represents the point (x(t_i + τ), y(t_i + τ)) for i = 1, 2, 3.
We next consider the case where 1 < β <
- For this case, we have
x(ti + r) > σ. Note that x(t2 + τ) = — 1 + t4 € (t2 + τ, t·} + τ) such that x f a ) = σ, that is, [1 + σ - 2ßeT]e-^~tl)
< σ, therefore there exists
+ 1 =
σ.
This, together with 1 < β < ' f f f f i f , implies that t3 < t4 = ti + \n[2ßer
-
( 1 + σ ) ] - l n ( l - σ) < t3 + τ .
Clearly, is the fourth zero of (χ — σ)(γ — σ) in [0, oo). Again according to (5.5.14) and (5.5.15), we have x(t — τ) < σ and y(t — z) > afort e (/3 + τ, f4 + t). Itfollows from (5.5.3) and (5.5.4) that (x(t), y(t))T satisfies system (5.5.6) fori e te+τ, tt+τ).
Therefore, for t e
+ τ, fy + τ] we have x(t) = + r) + 1] - 1 _ 4^-4^+l-V (,_„) _ 1 — e A> 1—σ
(5.5.16)
= [(1 + σ)£ - 2 e r ] e - ( i " f l ) + 1. In particular, x(t4 + r) = -1 + and y(i 4 + τ) = 1 -
4e
- 4ßeT + 1 - σ< 2ße* - (1 + σ)
2eT - (1 + σ)β 2βετ - (1 + σ)
(1
-σ)β~τ.
Note that t4 + τ < ts + 2τ, and that from (5.5.15) we have x(t — r) > σ and y(t — τ) > σ for t € (t4 + τ, tj, + 2τ). Consequently, (x(t), y(t))T satisfies system (5.5.5) for t G {t4 + r, tj + 2r). Therefore, for t e[t4 + τ, tj + 2τ] we have x(t) = e-^-k~T)[x(t4 + τ) + 1] - 1 _ 4e2r_4ßer + 1_a2 ((_u) e A' — l—σ y(t) = e-^~THy(t4 + r) + 1] - 1 2r T _ Aße —4e +(1—σ2)β —fr—fi) _ ι — e A1—σ
(5.5.17)
Let ts be the fifth zero of (jc — a)(y — σ) in [0, oo), i.e., the first zero after t4 of (x — a)(y — σ). Then (x(t), y(t))T satisfies system(5.5.5)fori e (ί4 + τ, ts + r), and so (5.5.17) holds for all t e [ί4 + τ, ί 5 + τ], On the other hand, from (5.5.15), (5.5.16), (5.5.17) and the fact that β > 1, by solving the equation (x(t) — a)(y(t) — σ) = 0, we can obtain t5 = h+
ln[4elT - 4ßez + 1 - σ2] - ln(l - σ 2 ).
This, together with (5.5.17), implies that x(ts + r) = -1 + (1 and y(f5 + T) = - 1 + ff
^ ß
<
+σ)β~τ
4ße2x - 4eT + (1 -
σ2)β
4e2z - 4ßeT + 1 - σ2
(1+aJe-
1 3 W 1 ' t h e n '5 > t3 + r and y(t5 + r) > σ. Moreover,
from (5.5.16) and (5.5.17), it is easy to see that * , 5 + r ( 0 ) = x(ts + τ + θ) < σ and
Figure 5.5.4. The trajectory of (5.5.3) with |σ| < 1 and the initial value (φ, ψ)^T ∈ X^{+,+} with 1 < β < e^τ, where β denotes the ratio (ψ(0)+1)/(φ(0)+1); P_i represents the point (x(t_i + τ), y(t_i + τ)) for i = 1, 2, ..., 5.

y_{t₅+τ}(θ) = y(t₅ + τ + θ) > σ
for θ € (—τ, 0). This shows that * ί 5 + τ € C~ and >V5+T e C+. So, by Case 1, we have (x(t), y ( 0 ) r ( - 1 , l ) r as t oo. See Figure 5.5.4. It now remains to consider the case where 1 < β < we have y(ts + τ) < σ. Again note that β > 1 implies y(t4
+ τ) = 1 -
2eT — (1 + σ)β η τ /ι ι T 2ße - (1 + σ )
. For this case
> 1 - (1 - σ)β~τ
-
Thus, there exists te € (ί4 + τ, ts + r) such that y(te) = o, that is y(te)
=
AßelT
-
4 e r + β{\
σ2)
-
-te-ti)
_ ι
=
σ
.
1 - σ
This implies that t6 = ti + ln[AßelT
Let
- AeT + (1 40e2r 4€2τ
- 4eT _
Αβετ
σ2)β]
-
σ2)β
+ (1 +
1 _
1η(1
σ
2
-
σ2).
>
σ.
Then y(t5 + r) =-l
+ (l+a)F(ß)e~T
and
t6 = ts + In F(ß).
Note that x(h + r) = x(t5 + r) = - 1 + (1 + σ)β~τ. Replacing β and t\ by Fiß) and ts, respectively, and applying the same procedure as above, we can obtain the following: (I) For t e (ts + r, t(, + τ), (x(t), y(t)) T satisfies (5.5.6). Moreover, x(t) = (1 + σ)έΤ('-'5) - 1, y(t) = ((l+a)F(ß)
-
+ 1,
x(t6 + r) = -1 + and y(te + τ) = 1 '
e
F(ß)
.
(II) For t e (te + r, ίη + τ), where ίη = ts + ln(2e r - (1 + a)F(ß))
- ln(l - σ),
(x(t), y(t)) T satisfies (5.5.7). Moreover, x ( 0 = (1 + σ - 2 F ( ß ) e T ) e - ( t ~ ^ + 1, y(t) = ((1 + a)F(ß)
x
- 2 e
ritn-^τΛ2F(ß)e*-α+σ) n l Χ(ίη + τ) - 11 - 2e*-{\+a)F(ß)i
_
) e + 1, °>, e
T
and y(i 7 + T) = l - ( l - a ) e - r . 3T 2T /τττ^ Tf 8e4T+(—2σ2 +8σ+18)e2*+(1 +σ)2 (1 —σ ) ^ R . 4g +(5-aV ,· ^ l+q+2e 2 Ρ < 2τ 2 τ (20+4σ)^+(1+σ)(-σ -2σ+7)^ 8ί +1-σ ' (3+σ> F(yS) < eT, then x(tj + τ) < σ. Moreover, xtl+T € C~ and e C+. Therefore, by Case 1, we have (x(0, y(t))T —> (— 1, l ) r as ί —> oo. See Figure 5.5.5.
ilV) If 1 < Β < 8^+(-2σ2+8σ+18>2Γ+(1+σ)2(1-σ) · UVj Π 1 < ρ < (2ο+4<τ)^+(1+σ)(-σ2-2σ+7)^ ' then jc(?7 + τ) > σ and tl
1ι <
Ff ^ ^ \Ρ>
+ τ > ί 8 = ί5 + ln(2F(^)e r - (1 + σ)) - 1η(1 - σ).
<
1+σ+2^2τ (3+σ)β* '
Figure 5.5.5. The trajectory of (5.5.3) with |σ| < 1 and the initial value (φ, ψ)^T ∈ X^{+,+} with 1 < β < e^τ, where β denotes the ratio (ψ(0)+1)/(φ(0)+1); P_i represents the point (x(t_i + τ), y(t_i + τ)) for i = 1, 2, ..., 7.
For ί g (i7 4- r,
+ r ) , (x(t), y ( i ) ) r satisfies (5.5.6). Moreover, _ A f - t F W
+ l - . * ^
_ ,
1—σ y(0
= [(1 + σ)Γ(β)
jc(r 8 + τ ) = - 1 +
e
- 2eT]e~(t~t5)
+ 1,
4e2T - 4F(ß)eT + 1 - σ 2 _ — —
and y(t% + τ ) = 1 - — — — T (1 - a)e 2 F ( ß ) e - (1 + σ )
\
(V) For t g (?8 + τ, ^ + τ ) , where t9 = t5 + l n ( 4 e 2 r - 4 F ( / 3 ) e r + 1 - σ1) - l n ( l - σ 2 ) ,
(jt(i), y ( t ) ) T satisfies (5.5.5). Moreover,
x(t)
=
y(t)
=
4elT - 4F(ß)eT
+ 1-
σ2
1—σ
χ(ί9 + τ) = - 1 + (1
_ 11
(t-h)
1-σ zt „2r „τx +ι / ι(1 ——2\ 4F(ß)e — λAe az)F(ß)
e
9
_ . v (f•h)
1,
+a)e~\
and y(t9 + T) = -l where F2(ß) r V | , Tf V J
=
(l+a)F2(ß)e-z,
+
F(F(ß)).
16gSr+8(7—a2)g3T+(l—σ2)(9—g2)gT < a 48e4r +16(2—σ2)β2τ +(1 -σ 2 ) 2 - P
8e 4 r +(-2a 2 +8g+18)e 2 r +(l+a) 2 (l-g) (20+4σ)β3τ+(1+σ)(-<τ2-2σ+7)βτ '
< 1 5 4 $ ^ ' then + T ) - σ ' m 0 r e 0 V e r ' *'9+τ ^ C " a n d y i 9 + r e C + . Therefore, by Case 1, we have (x(t), y(t))T — ( — 1 , l ) r a s i —• oo. See Figure 5.5.6.
Figure 5.5.6. The trajectory of (5.5.3) with |σ| < 1 and the initial value (φ, ψ)^T ∈ X^{+,+} with 1 < β < e^τ, where β denotes the ratio (ψ(0)+1)/(φ(0)+1); P_i represents the point (x(t_i + τ), y(t_i + τ)) for i = 1, 2, ..., 9.
3T 3 2 τ rvmTf 1 Λ
and y(tg + τ) < σ. For this case, we again repeat the above procedure. We now claim that for any β e (1, 0 such that xt*+TeC~
and
yt*+TeC+.
(5.5.18)
Therefore, from Case 1, we have (x(t), y(t)) T (—1, l ) r as t ->· oo. Suppose, by way of contradiction, that this claim is not true. Then the above procedure can be repeated inductively to generate a sequence {tn}^iLi such that tn is the n-th zero of (χ — o)(y — σ) in [0, oo). Moreover, ti = ln(
Un- 2 =
t4n-1 = t4n-3 + ln(2eT - (1 + a)Fn~\ß)) n l
T
t4n = t4n-3+ln(2F - (ß)e t4n+l
= Un-3 + ln(4e
2r
- ln(l - σ),
- ( 1 + σ ) ) - ln(l - σ), n
- AF ~\ß)eT
+ 1 - σ 2 ) - ln(l - σ 2 );
x(ti + τ) = - 1 + (1 + σ ) β ~ τ ,
y(fi+T)
-l+ß(\+a)e~T-
=
x(t4n-2 + τ) = -1 + „ , , . y(t4n-2 + t ) = 1
1+σ Fn~Hß)'
2e r — (1 + a)Fn~^{ß)
*
_τ ;
2Fn~l(ß)eT (1+σ) (1-σ)β~τ, x(t4n-1 + τ) = 1 T n 2e - (1 +a)F ~Hß) yifAn-1 + T ) = 1 - (1 -a)e~T· Ae JC (t4n + τ) = - 1 +
— AF (ß)e + 1 —
J X(t4n+i + τ) = - 1 + (1 + σ)ε~τ, \y(t4n+ι+τ) = — 1 + (1 + a)Fn(ß)e~T; ( x(t) =
=
(l+a)ße-«-^-l,
r
\
134 x(t)
5 Signal transmission delays = (1 +a)e~(f_,l)
- 1,
y(f) = [(1 + σ)β - 2e*]e-«-V
+ 1,
x(t) = [1 + σ - 2Fn-liß)eT]e-^-^ y(t)
'
= [(1 + a)F»~Hß)
€
'
=
+
τ];
+ 1,
^ f e
~
+
_ 4g2r — 4Fn~l(ß)eT + 1 — σ2 1—σ n l y(i) = [(1 + a)F ~ (ß) - 2eT]e~^-t4"-3)
1
^ +
1>
^„-2
τ
^
+ r, ί4»-ι + τ],
^ _ t € [ίφ,-ι + τ, t4n + τ]; + ι,
~σ - 4eT + (1 -
t e [Un + τ, f4«+i + τ]; _ h
1—σ
\ x{t) = (1 + I y(i) = [(1 + < r)F»(/J) -
- l, + 1,
'
G [ 4w+1
'
+ τ ' '4<M+1>-2 +
where η = 1 , 2 , F ° 0 8 ) = F1^) = and F n (/3) = F(Fn~\ß)) η >2. Furthermore, it is necessary that, for all η = 1 , 2 , . . . , x(t4n-i +τ) > σ,
i.e.,
τ];
for
ί4«-ΐ + τ > ί 4η
(5.5.19)
ί4η+ι + τ > ί 4 (η+ΐ)-2·
(5.5.20)
and y(Un+\ + τ) < σ,
i.e.,
Otherwise, if «ο is the first positive integer such that (5.5.19) or (5.5.20) does not hold, then a similar argument as above shows that (5.5.18) holds for t* = i 4 « 0 -i or t* = i 4 „ 0 +i, respectively, which means the above claim is true. From the above expressions of tn and jc (tn + τ), it is clear that (5.5.19) is equivalent to „ , 1 + σ + 2e2z n F ~\ß)< , η = 1,2,... (5.5.21) w (3 + σ )ετ and that (5.5.20) is equivalent to Fn(ß)<eT,
n = 1,2,....
(5.5.22)
On the other hand, after some simple calculations, we have
$$\frac{d}{d\beta}F(\beta) = \frac{[4e^{2\tau} - (1+\sigma)^2]\,[4e^{2\tau} - (1-\sigma)^2]}{(4e^{2\tau} - 4\beta e^{\tau} + 1 - \sigma^2)^2} > 0 \qquad (5.5.23)$$
and
$$\frac{d}{d\beta}(F(\beta) - \beta) = \frac{32\beta e^{2\tau}(e^{\tau} - \beta) + 16e^{2\tau}(\beta^2 - 1) + 8(1-\sigma^2)\beta e^{\tau}}{(4e^{2\tau} - 4\beta e^{\tau} + 1 - \sigma^2)^2} > 0 \qquad (5.5.24)$$
for all 1 < β < e^τ. (5.5.23) and (5.5.24) imply that F(β) and F(β) − β are strictly increasing on (1, e^τ). Since lim_{β→1}(F(β) − β) = 0, we have F(β) − β > 0 for all β ∈ (1, e^τ), which, together with the monotonicity of F(β) and F(β) − β, implies that
$$F^{n}(\beta) > F^{n-1}(\beta) \qquad \text{and} \qquad F^{n+1}(\beta) - F^{n}(\beta) > F^{n}(\beta) - F^{n-1}(\beta)$$
for all β ∈ (1, e^τ) and n = 1, 2, .... Therefore, we have
$$F^{n}(\beta) = \beta + \sum_{i=1}^{n}\big(F^{i}(\beta) - F^{i-1}(\beta)\big) \ge \beta + n\,(F(\beta) - \beta) = \beta + n\,\frac{4(\beta^2-1)e^{\tau}}{4e^{\tau}(e^{\tau}-\beta) + 1 - \sigma^2}$$
for all β ∈ (1, e^τ) and n = 1, 2, .... Note that 4(β²−1)e^τ / (4e^τ(e^τ−β) + 1 − σ²) > 0 for β ∈ (1, e^τ). It follows that
$$\lim_{n\to\infty} F^{n}(\beta) = \infty$$
for all β ∈ (1, e^τ), which contradicts (5.5.21) and (5.5.22), and hence the above claim is true.

Case 5. We finally consider the case where φ, ψ ∈ C⁻ and ψ(0) > φ(0) > e^τψ(0) − e^τ + 1. For this case, the proof is similar to that of Case 4, and thus is omitted. This completes the proof of Theorem 5.5.2.

The essential part in the above proof is the connection between (5.5.3) and a one-dimensional map. This will be used later in our discussions on transient behaviors. Similar arguments can also be applied to show that delay-induced oscillation can be prevented by increasing the threshold or by decreasing the synaptic connection in the following frustrated network
$$\dot{x} = -x + f(y(t-\tau)), \qquad \dot{y} = -y - f(x(t-\tau)), \qquad (5.5.25)$$
with f given in (5.5.4). In particular, we establish the existence of a stable limit cycle in (5.5.25) if |σ| < 1: the existence of such a cycle is independent of the size of τ, though the period and the amplitude of such a cycle depend on τ. We also show that the important parameter for nonlinear oscillations is not τ but σ, and the critical values of σ are ±1 in the sense that if |σ| > 1 then each trajectory of (5.5.25) converges to a single equilibrium. More precisely, we obtain

Theorem 5.5.6. Let |σ| < 1. Then there exist Φ₀ = (φ₀, ψ₀)^T ∈ X_σ and T₀ > 0 such that the solution (x^{Φ₀}(t), y^{Φ₀}(t))^T of (5.5.25) with initial value Φ₀ is periodic for t ≥ T₀. Moreover, lim_{t→∞}[x^Φ(t) − x^{Φ₀}(t)] = 0 and lim_{t→∞}[y^Φ(t) − y^{Φ₀}(t)] = 0 for every solution (x^Φ(t), y^Φ(t))^T of (5.5.25) with Φ = (φ, ψ)^T ∈ X_σ.
The proof is very similar to that given above for (5.5.3). The basic idea is to show that a typical trajectory of (5.5.25) is spiraling and rotates around the point (σ,σ). Figure 5.5.7 shows the periodic orbit in case |σ| < 1, this orbit can be obtained as follows:
Figure 5.5.7. The periodic trajectory of (5.5.25) with |σ| < 1, where (φ, ψ)^T ∈ X^{σ,+} is chosen so that the corresponding ratio equals the critical value λ₀. The trajectory starts at P₀ = (φ(0), ψ(0)), and λ₀ is chosen so that P₄ coincides with P₀ to obtain a periodic orbit. Note a = −1 + (1 + σ)e^{−τ}, b = 1 − (1 − σ)e^{−τ}.
Choose a point Po = {φ{ϋ), ψ(0)) in the 2-dimensional Euclidean space (not the phase space) suchthat (φ(0) — σ, ·ψ(0) — σ) is in the first quadrant and (φ, ψ) e The trajectory moves first along the line connecting Po to (—1, 1) until Pi = (x(t\ + τ), y(t\ + τ)), where t\ is the first time when the trajectory meets the vertical line χ = a . Then the trajectory changes its direction and moves along the line connecting Pi to (—1, —1) until P2 = (xfa + τ), y(t2 + τ)), where ti is the first time when the trajectory meets the horizontal line y = σ. In the same way, the trajectory moves to P3 = (x(t3 + r), y(t3 + τ)) and then to P4 = (x{U + τ), y(t^ + τ)) along the line connecting P2 to (1, — 1) and then the line connecting P3 to (1,1), with t^ and t4 being the next zeros of x(t) — σ and y(t) — a, respectively. The periodic orbit is obtained by choosing (φ(ϋ), ψ(0)) so that Po coincides with P4. The existence of such (^(0), ψ(0)) is guaranteed by the existence of a fixed point to a 1-dimensional increasing map. Figures 5.5.8 and 5.5.9 show that a typical trajectory spirals towards
the above limit cycle from outside or from inside.
Figure 5.5.8. A typical trajectory of (5.5.25) with |σ| < 1, where the initial value (φ, ψ)^T ∈ X^{σ,+} is chosen so that the corresponding ratio exceeds λ₀. The trajectory spirals towards the periodic orbit along broken lines, starting from P₀ = (φ(0), ψ(0)) outside the periodic orbit.
Note that the existence of a limit cycle is independent of the size of τ provided |σ| < 1. The next result shows that increasing |σ| is the key to preventing the occurrence of a limit cycle.

Theorem 5.5.7. (i) Let |σ| > 1 and Φ = (φ, ψ)^T ∈ X_σ. Then, as t → ∞, (x^Φ(t), y^Φ(t))^T → (1, −1)^T if σ > 1, and (x^Φ(t), y^Φ(t))^T → (−1, 1)^T if σ < −1; (ii) let σ = 1. Then, as t → ∞, (x^Φ(t), y^Φ(t))^T → (1, −1)^T if either Φ ∈ X^{+,+} ∪ X^{−,+} ∪ X^{−,−} or Φ ∈ X^{+,−} and
Figure 5.5.9. A typical trajectory of (5.5.25) with |σ| < 1, where the initial value (ιφ, ψ ) ) τ 6 X a ' + satisfies < λο- The trajectory spirals towards the periodic orbit along broken lines in the same way as in Figure 5.5.8, but from inside the periodic orbit.
5.6 Delay-induced transient oscillations

Numerical investigations indicate that even when precautions are taken to avoid delay-induced instabilities in the Hopfield network of neurons, oscillations may arise in the presence of delays. These are delay-induced transient oscillations (DITOs), and cannot be predicted by the analysis of the asymptotic dynamics of neural networks. Examples of DITOs were first reported in two mutually exciting neurons and in rings by Babcock and Westervelt [1987] and Baldi and Atiya [1994]. In Pakdaman, Grotta-Ragazzo, Malta and Vibert [1997a] and Pakdaman, Grotta-Ragazzo and Malta [1998a], it was observed that when DITOs appear, the duration of the transients, that is, the time required for the system to stabilize at an equilibrium, increases as the characteristic charge-discharge time of the neurons tends to zero, or, equivalently, as the delay is increased to infinity. In fact, DITOs can last so long that for practical purposes they are indistinguishable from sustained oscillations. Therefore, understanding the origin of DITOs in neural networks constitutes an important complement to the analysis of delay-induced instabilities in order to avoid long-lasting oscillations that deteriorate network performance.
It seems that one can attribute the presence of DITOs to the fact that solutions of (5.2.1) transiently follow those of the difference system defined by
System (5.6.1) can display attracting oscillations even when almost all solutions of the corresponding delay equation (5.2.1) are convergent. The fact that some solutions display transient oscillations before stabilizing at equilibrium is the result of the difference between the asymptotic dynamics of the continuous dynamical system (5.4.1) and the discrete network (5.6.1). The main purpose of this section is to confirm the above observation through the analytical calculation of the transient regime duration (TRD) for the following two-neuron network
$$\dot{x}(t) = -x(t) + w f(y(t-\tau)), \qquad \dot{y}(t) = -y(t) + w f(x(t-\tau)), \qquad (5.6.2)$$
where / is the step function, i.e., f(x) = 1 if χ > 0 and —1 otherwise. Following Pakdaman, Grotta-Ragazzo and Malta [ 1998a], we show that the dynamics of system (5.6.2) can be understood in terms of the iterations of a one-dimensional map. We first provide a geometrical description of the construction of this map, and then state the general results. Assuming that the delay r is set to zero, system (5.6.2) defines a two-dimensional ordinary differential equation, the trajectories of which in the (JC, plane can be easily constructed. We denote by r\ = — (w, w), ri -- (0, 0), = (w, w), r[ = (—w, w), and rj = (w, —w). r\ and rj are the stable equilibria of the system to which most trajectories converge. Trajectories with initial conditions (w, ν) such that u > 0 and ν > 0 are the segments of the straight lines connecting the initial point to r3. An example of such a trajectory in the two-dimensional phase space is shown in Figure 5.6.1. For an initial condition (u, ν) with 0 < — u < v, the trajectory coincides with the straight line connecting the initial point to r3, until reaching the y-axis, that is, as long as x(t) < 0. From this point on, the trajectory is the segment connecting the intersection point with the y-axis to the equilibrium point r3. Thus, such trajectories are broken lines. One example is also shown in Figure 5.6.1. The trajectories of other initial conditions can be constructed in the same way.
Figure 5.6.1. Two typical trajectories of the network (5.6.2) with delay τ = 0.
In the following, we will mainly concentrate on the influence of delays on the trajectories of initial conditions which are constant mappings with constant values (u, ν) satisfying either u > 0 and ν > 0 or 0 < — u < v. These two regions of initial conditions are referred to as regions I and II, respectively. Trajectories of other constant initial conditions can be derived from these through symmetry considerations. Since the input-output function of the neurons is a step function, delays do not alter the trajectories with initial conditions in I. These are the same as for the system in the absence of delay, that is, they are segments connecting the initial condition to Γ3, as exemplified in Figure 5.6.2. The situation is different for initial conditions in II. In the case without delay, the effect of the sign change in χ appeared instantaneously, as the trajectory switched to the straight line connecting r3 right upon crossing the y-axis. When interunit signal transmission is not instantaneous, this effect is delayed, that is, the trajectory continues along the same straight line towards the point r'3, during a time interval equal to the delay r, before switching to the segment that takes it towards the point For all trajectories starting in region II, the break points have exactly the same abscissae. Thus, there is a critical value of v, denoted by v\, such that the break point is exactly on the x-axis. It is easy to verify that υι = w(eT — 1). This point plays an important role, as all initial conditions in II whose trajectories cross the _y-axis above vi reach the stable point ri after exactly one break point, such as the one shown in Figure 5.6.2.
Figure 5.6.2. Some typical trajectories of the network (5.6.2) with τ > 0: a trajectory starting in region I is the segment connecting the initial condition to r3; a trajectory starting from region II moves towards r3 first. If the intersection of the trajectory with the y-axis is above v\ then the trajectory reaches r^ after one break point.
Trajectories of other initial conditions will display two or more break points before they enter the region where y < 0, that is, at some point, the second variable changes sign. This sign change will affect the trajectory after a time delay, and provoke a second break point, where the trajectory switches to the straight line leading towards r\. Other break points follow this, so that eventually the broken line representing the trajectory will end up at the equilibrium point An example of a trajectory with three break points is shown in Figure 5.6.3. The analysis hinges upon the construction of a map that describes the trajectories after the second break point. Let us first note that the break point and the trajectory with an initial condition in II are entirely determined by the ordinate ν at which the trajectory first intersects the y-axis. So in the following we only take this value into consideration. Then, for a given value ν the construction of the map relies on the observation that we can find a point on the Jt-axis, denoted by g(v) in Figure 5.6.3, whose trajectory coincides exactly with that of the original initial condition after the second break point. Geometrically, this point is easily obtained as the intersection point between the jc-axis and the straight line connecting r\ to the second break point of the trajectory. Thanks to this simple
Figure 5.6.3. A trajectory with 3 break points.
geometrical construction and after elementary calculations of the values of x(τ + t*) and y(τ + t*), where t* is the first time the trajectory intersects the y-axis, we can obtain g(v) analytically as
$$g(v) = \frac{w(2 + e^{-\tau})\,v}{2w - (v + w)e^{-\tau}}. \qquad (5.6.3)$$
Once the point (g(v), 0) on the jc-axis is determined, we can reiterate the same analysis and construction assuming that the initial condition is in fact this new point. Thus, we can easily see that if g(v) > vi, the trajectory will have only one more break point before tending to the equilibrium r j along a straight line. This situation is represented in Figure 5.6.3. Conversely, when g(v) < v\, the trajectory will have at least two more breaking points, since it will cross the y-axis again. For such points, we can construct a point (0, s) on the y-axis such that the trajectory after the second break point, i.e., the fourth break point along the trajectory starting from the initial condition, coincides with that of (0, s). Thanks to the symmetries of the system, we have s = g(g(v)) = g2(v). If g2(v) > vi, we stop the process because after the fifth break point, the trajectory will converge to the stable equilibrium point along a straight line, otherwise, we continue to iterate g until there is η such that gn(v) > See Figure 5.6.4. Such a value η exists because g(v) > ν for all υ > 0. Figure 5.6.5 shows an example of the function g and the cobweb construction of the sequence of points g" (ν), until it becomes larger than v\.
Figure 5.6.4. The sign change of y for a trajectory starting in region Π provokes a second break point where the trajectory switches to the straight line leading to r\.
Figure 5.6.5. An example of the iteration of the map g. The curve is the graph of the function g, the horizontal line is at v\ and the straight line is the diagonal. The cobweb construction shows the successive iterates g(v) until they become larger than ν ι.
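The cobweb construction above translates directly into a few lines of code. The sketch below iterates g from (5.6.3) until the value exceeds v₁ = w(e^τ − 1); the iteration count is a rough proxy for the number of break points, and hence (by Theorem 5.6.3 below) for a lower bound on the transient regime duration. The parameter values are illustrative only.

```python
import math

def g(v, w, tau):
    # the one-dimensional map (5.6.3)
    return w * (2 + math.exp(-tau)) * v / (2 * w - (v + w) * math.exp(-tau))

def iterations_to_escape(v, w, tau, max_iter=100_000):
    """Number of iterations of g needed for v to exceed v1 = w(e^tau - 1)."""
    v1 = w * (math.exp(tau) - 1.0)
    n = 0
    while v <= v1 and n < max_iter:
        v = g(v, w, tau)
        n += 1
    return n

w, tau = 1.0, 2.0
for v in (0.01, 0.1, 1.0):
    n = iterations_to_escape(v, w, tau)
    print(v, n, "TRD lower bound roughly", n * tau)
```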
We can formalize the above construction as follows:

Theorem 5.6.1. For r = (u, v) ∈ ℝ² such that v > −u > 0, let V(r) = w(v + u)/(w − u) and let n be the integer such that g^{n−1}(V(r)) ≤ v₁ < gⁿ(V(r)), where v₁ = w(e^τ − 1). Then, there exist T > nτ and θ > 0 such that for t > T, z(t, r) = z(t − θ, rₙ), where z(t, ξ) is the solution of (5.6.2) through the initial condition ξ (ξ = r or rₙ), and rₙ represents the point (0, gⁿ(V(r))) on the y-axis for n even, and the point (gⁿ(V(r)), 0) on the x-axis for n odd.

Our interest is in the transient oscillations of the solutions, and the incidence of these on the TRD, that is, the time it takes the trajectory to reach a small neighborhood of the equilibrium point. Using g and its iterates we can characterize both aspects of the dynamics of the system. Oscillations for a system like (5.6.2) can be defined as points where either one of the variables x or y takes the value zero; in other words, a solution oscillates as long as there are times t such that x(t)y(t) = 0. We refer to such points as the zeros of the solution, and we denote by N(r, τ) the number of zeros of the solution of (5.6.2) with delay τ going through the initial condition r. The larger the value of N, the more the solution oscillates. The following result shows that solutions of initial conditions r = (u, v) in region II close to the straight line u + v = 0 can display arbitrarily large numbers of zeros:

Theorem 5.6.2. Let vₙ = g^{−n}(v₁); then
$$v_n = \frac{2w(2-e^{-\tau})^{n}(e^{\tau}-1)}{2(2+e^{-\tau})^{n} + [(2+e^{-\tau})^{n} - (2-e^{-\tau})^{n}](e^{\tau}-1)}.$$
We have νι(τ) > ι>2(τ) > · · · > υ^(τ) > · · · > 0, and vn tends to zero as η —• oo. For r = (u, v) e R 2 , with 0 < — u < v, we have N(r, τ) = 1, 2 ρ andlp + 1 (ρ > 1) for υ > ι>ι — (1 + v\/w)u, ν = vp — (1 + vp/w)u and vp+\ — [1 + (vp+i)/w]u < ν < vp — (1 + vp/w)u, respectively. Recall that the trajectory through (0, υ) is a broken line, and it converges to a stable equilibrium along a straight line after several break points. We call the time required for the trajectory to reach the starting point of the final straight line along which the trajectory converges to an equilibrium the transient region duration(TKD). The number of zeros of a solution informs us about the TRD, because successive zeros cannot be arbitrarily close to one another, and as long as the solution is oscillating, it is obviously not stabilizing at the equilibrium point. Thus, any measure of the length of transient oscillations provides a lower bound for the TRD. We have the following result: Theorem 5.6.3. We denote by T(r, x ) the transient regime duration of a solution of (5.6.2) with delay τ going through the initial condition r = (u, ν) e R 2 with 0 < —u < v. Then, T(r, τ) > ρτ for vp+\ — [1 + (vp+\)/w]u < ν < vp — (1 + vp/w)u.
This result indicates that for fixed delay r, the TRD grows indefinitely as initial conditions get closer to the line u + ν = 0. In Pakdaman, Grotta-Ragazzo and Vibert [1998a], a numerical calculation is carried out that shows the TRD, as a function of the ordinate ν for initial conditions (—10 -3 , υ), increases abruptly as ν approaches ΙΟ" 3 . All the previous analyses were carried out for a system with fixed delay values. Our main concern is to see how the results are modified when the delay is increased. This is done in the following theorem which stems from the fact that the values νη(τ) of the bounds delimiting the regions with a given number of zeros, and hence with a given range of TRD, increase exponentially with the delay. Theorem 5.6.4. For a fixed initial condition r = (u,v) with 0 < —u < v, there is a strictly increasing sequence 0 < τ\ < T2 < · • · < Xk < · · ·, such that z(t, r), the solution of (5.6.2) with delay τ going through r, displays exactly 1, 2p, or 2p + 1 zero(s) for τ < τ\, τ = τρ, or τρ < τ < τρ+ι, respectively. In other words, N(r, τ) = 1, 2ρ, and 2ρ + I for τ < τ\, τ = τρ, and τρ < τ < τρ+\, respectively. Thus, for fixed r, Ν(r, τ) and consequently T(r, r) are increasing functions of r, when r is large enough. The speed with which these quantities increase can then be determined by remarking that ν„(τ) ^ 2weT/(n + 2) as r — o o , and by using the lower bound on the TRD provided above. We have the following result on exponential increase of transient regime duration. Theorem 5.6.5. Let r = (u, v) e R 2 with 0 < — u < v, and let ρ = [2we T /{w + V)] — 1 where the brackets indicate the integer part, and V = w(u + v)/(w — u). Then, we have T(r, τ) > ρτ or T(r, τ) > 2wreT/(w + V) as τ οο. Numerical simulations carried out in Pakdaman, Grotta-Ragazzo and Malta [1998a] confirmed that the above lower bound for TRD is in good agreement with the true value of TRD. All the analysis and results presented until this point were carried out for a steplike input-output function g. This assumption allowed us to carry out a detailed analytical description of the system dynamics. Numerical investigations show that similar results hold for the system J i(t) = -x(t) + wg(y(t - r)), Ρ ί y(t) = -y(t) + wg(x(t - τ)), ·°· ; where the signal functions are g(x) = tanh(ax) with aw > 1 (Pakdaman, GrottaRagazzo and Malta [1998a]) and for an excitatory ring (Pakdaman, Malta, GrottaRagazzo, Arino and Vibert [1997c]).
5.7 Effect of delay on the basin of attraction

We now consider the following system of delay differential equations
$$\dot{x}_1(t) = -\gamma_1 x_1(t) + I_1 + w_1 f(x_2(t-\tau_1)), \qquad \dot{x}_2(t) = -\gamma_2 x_2(t) + I_2 + w_2 f(x_1(t-\tau_2)), \qquad (5.7.1)$$
with (5.7.2) We note that all results presented here can be extended to general smooth signal functions, but to describe some numerical simulations it is more convenient to specify the signal function involved. As illustrated before, many networks are designed so that stable equilibrium points represent stored information and transmission delay may render some equilibria unstable, thus deteriorating information retrieval. Much progress has been made to derive sufficient conditions on network parameters such as connection weights, network connection topology and signal delays that ensure that the behaviors of networks with or without delay are similar. In particular, in Sections 5.3 and 5.2, we provide two types of conditions: (i) Constraints that ensure that the delay does not induce the loss of information stored in the stable equilibrium points. This is achieved when the system with delay has exactly the same stable equilibria as the one without delay, that is, the delay does not alter the local stability of the stable equilibrium points. (ii) Constraints that ensure that the delay does not induce spurious stable undamped oscillations. This is achieved when both the system without delay and the one implementing delay are (almost) convergent, that is, the delay does not alter the global stability of the system. Constraints (i) and (ii) avoid delay induced changes in the asymptotic behaviors of the system. However, even under such constraints, the delay may influence important features in the system's dynamics. For instance, changing the delay can alter the boundary of the basins of attraction of the stable equilibrium points. This can be of prime importance in associative memory networks in which the position of the basin boundaries determines which information is retrieved for a given initial condition. In this section, we are going to introduce the work of Pakdaman, Grotta-Ragazzo, Malta, Arino and Vibert [1998b] for the network (5.7.1) to show that constraints (i) and (ii) may not be sufficient to preserve the shape of the basin boundaries. Namely, we consider the dynamics of a network of two neurons connected through delayed excitatory connections. This system has been chosen because it is simple enough to allow thorough theoretical and numerical analyses, while it retains essential features of large networks. Pakdaman et al. showed that this two neuron network satisfies both of the conditions stated above, that is, the delay does not affect the
local stability of its stable equilibria, and, no matter what value the delays take, the two-neuron network remains almost convergent. Yet, they showed that the boundary separating the basins of attraction of two stable equilibria depends on the delays. Clearly, we can choose the phase space X = C([—τ2, 0]; Μ) χ C([—τι, 0]; R) equipped with the supremum norm. For an initial condition φ = (φι, 02) e X, there exists a unique solution of (5.7.1), denoted by z(t, φ) = (*i (t, φ), χι(t, φ)), such that χι(θ, φ) = φχ(θ) for - τ 2 < θ < 0, χ2(θ, φ) = 02(0) for - τ ι < θ < 0 and z(t, 0) satisfies (5.7.1) for t > 0. Also for such a solution of (5.7.1), we denote by = ((*i)f(0), (*2),(0)) the element of X defined by (x\)t(&) = xx(t + θ) for Ζί(φ) θ e [ - τ 2 , 0 ] and (xi)t(ß) — x2(t + Θ) for θ e [ - τ ι , 0]. We then obtain a semiflow {zr(^)}f>o on X. In spite of the differences between (5.7.1) and their related ODEs mentioned above, there are cases in which their dynamics are, in some sense, very similar. For (5.7.1), this happens when w\W2 > 0 (positive feedback condition). In the rest of this section, we will assume that both connection weights are strictly positive, that is, wi > 0 and w 2 > 0. 7 For 0 = (0i, 02) and ψ = (ψ\, fa) in X, we say that φ > ψ (resp. φ > ψ) if 01 (0') > Ψι(θ') (resp. 0ι(0') > ψλ(θ')) for all Θ' e [ - τ 2 , 0 ] and 0 2 ( 0 ) > ψ2iß) (resp. 02(0) > ψ2(θ)) for all θ e [ - τ ι , 0]. Note that if φ > xj/, the set defined by {χ e X; 0 > χ > ψ} is an open subset of X. The positive feedback condition and Theorem 2.5 in Smith [1987] (see also discussions in Section 5.2) imply that (5.7.1) generates an eventually strongly monotone semiflow, that is, if 0 > φ' and φ φ φ' then z,(0) > Ζί(φ') for t > 2max{Tj, τ2}. Throughout this section, constant functions in X are identified with the value they take in M2. Note that z{t) = C*i (t), X2(t)) taking the value (a, b), that is, x\(t') = a for t' > — X2 and X2(t) = b for t > —x\, is a solution of (5.7.1) if and only if (a, b) satisfies the system: ί -για + Ii + wif(b) = 0, [ - n b + I 2 + w 2 f ( a ) = 0. The number and the value of the solutions of system (5.7.3) depend on the values of the parameters (γι, γ2, wi, W2, h , h ) · Generally, the parameter set can be separated into two regions, one in which the system has a unique solution and another in which it has three solutions. These solutions do not depend on the delays τ\ and Τ2· Geometrically, they are the intersection points between the curves —για + Ιι + w\f(b) = 0 and — Yib + h + w 2 / ( a ) = 0 in the (a, ft)-plane. An example of three equilibria is shown in Figure 5.7.1, and another example of one equilibrium is shown in Figure 5.7.2. 7
We point out that if u>i < 0 and u>2 < 0 then a simple change of variables y\ = x\ and y2 = — leads to a new network equation for (_vi, >2) for which the connection weights are strictly positive.
Figure 5.7.1. The case of three equilibrium points of the two-neuron system. The equilibrium points of (5.7.1) are the intersection points between the curves Γ₁ : −γ₁a + I₁ + w₁f(b) = 0 and Γ₂ : −γ₂b + I₂ + w₂f(a) = 0.
Figure 5.7.2. The case of one equilibrium point of the two-neuron system. The equilibrium point of (5.7.1) is the intersection point between the curves Γ₁ : −γ₁a + I₁ + w₁f(b) = 0 and Γ₂ : −γ₂b + I₂ + w₂f(a) = 0.
In the rest of this section we will restrict our attention to the case where system (5.7.3) has three distinct solutions. The constant functions associated with these solutions constitute the equilibria of (5.7.1) and will be denoted by r\, Γ2 and r^, so that, when considered as constant functions in X, they are ordered as r\ < r2 < rj. The eventually strong monotonicity property and the results in Smith [1987] imply that the asymptotic dynamics of (5.7.1) and its related ODE (obtained by setting Π = T2 = 0) is essentially the same in the following sense: (PI) The equilibrium n , k = 1,2, 3, of (5.7.1) is locally asymptotically stable if and only if the same is true of the related ODE; (P2) The union of the basins of attraction of the stable equilibria of (5.7.1) is an open and dense set in the phase space X, the same being true for the corresponding ODE. Recall that the basin of attraction of a stable equilibrium is the set of initial conditions in X whose trajectories converge to the equilibrium point. From the above results it can be deduced that r\ and ri are locally asymptotically stable equilibrium points, ri is unstable, and the union of the basins of attraction of r\ and r3 is an open and dense set in X. So that neither (5.7.1) nor its related ODE has stable undamped oscillatory solutions. Our focus in this section is the effect of delay on the boundary Β separating the basin of attraction of r\ from that of r3. Any neighborhood of a point in Β intersects the basins of attraction of both ri and rj. The following description of the boundary separating the two basins of attraction provides the basis for a method of numerical approximations. Theorem 5.7.1. Let u be a given element in X such that u > 0. There exists a continuous, strictly decreasing map8 bu : X —> Μ such that (i) For all φ G Χ, φ + ί>«(0) u is the unique intersection between the line going through φ and directed by u (i.e., the set {φ + Xu , λ e with the boundary separating the two basins of attraction. (ii) The set {φ € X; bu (φ) > 0} is exactly the basin of attraction
of r₁.
(iii) The set {φ e X; bu (φ) < 0} is exactly the basin of attraction ofr3. (iv) The set {φ € X; bu (φ) — 0} is exactly the boundary separating the two basins of attraction. Proof. We first prove the following claim: The solution going through an initial condition smaller (resp. larger) than Γ2 and different from Γ2 converges to r\ (resp. r?,), that is, for φ Ε X, if φ < ri (resp. φ > rj) and φ φ Γ2 then ζΛΦ) converges to r\ (resp. r$) as t —»• 00 (see Figure 5.7.3J. 8
We say that b_u : X → ℝ is strictly decreasing if φ > φ′ and φ ≠ φ′ in X implies b_u(φ) < b_u(φ′).
Figure 5.7.3. The convergence of a typical trajectory starting from a point below r-i or above Γ2 in the order relation of X.
To prove the claim, let φ e X be such that φ
r\ and 7-3, which contradicts property (P2). So, b'(φ) = b"(φ) and we denote this value by bu(φ). As the basins of attraction of the two stable equilibria r\ and are open sets, φ + bu(4>)u does not belong to either of the basins and is necessarily in the boundary. The characterization of the basins and the boundary described in the theorem are derived from the construction of bu. The fact that the map bu from X to R is continuous and strictly decreasing stems from the continuous dependence of the solutions of (5.7.1) on initial conditions, and from the monotonicity of the semiflow. This completes the proof. Corollary 5.7.2. Let φ belong to the boundary B, then Ztü0 converges to r\ (resp. r$) as t —> 00 for all ψ < φ (resp. ψ > φ) such that φ φ φ. Proof. Let φ be in the boundary. For φ e X such that φ < φ and φ φ φ, we have ζτ(Φ) < ζτ(Φ) for Τ sufficiently large. Let u be defined by u = ζτ(Φ) — ζτ(Φ) > 0. Then we have ζτ(Φ) = ζτ(Φ) + u. As ζτ(Φ) belongs to the boundary we have buizriif)) = 1 > 0, so ζτ(Φ), and therefore, φ belongs to the basin of attraction of r\. In a similar way, it can be shown that Ζί(φ) converges to r3 for φ > φ and φ φ φ. This completes the proof. From Theorem 5.7.1, we can deduce that the boundary has a regular structure. Indeed, let μ > 0 be in X, and Η be a hyperplane supplementary to u. Therefore, for all φ € X, we can write in a unique way φ = h+ ku, where h e Η and λ e R. ku is the projection of φ onto the line Ru along the direction H. We denote by pu (φ) — —λ, the unique real number such that φ + pu (0)w belongs to Η. Let φ belong to the boundary Β, we can write φ — h — pu{4>)u. From the definition of bu, we know that bu(h) is the unique real number such that h + bu(h)u belongs to B, so that we have necessarily ρΗ{φ) = —bu(h). From this we can deduce that the boundary is homeomorphic to the linear hyperplane Η. Corollary 5.7.3. The map φ φ + pu($)u from Β to Η is a homeomorphism with inverse: h —> h + bu (h)u from Η to Β. Corollary 5.7.4. The boundary Β is the graph of a map from the hyperplane Η to the line Rm. Proof. As the map (h, ku) —»• h + ku from Η χ Rm to X is an isomorphism, we identify these two spaces. From Corollary 5.7.3, we obtain that the boundary Β is the subset of Η χ Rm defined by {(h, bu(h)u)·, h e H}. In other words, Β is the graph of the map L from Η to Rm defined by L(h) — bu(h)u. The basin boundary can also be characterized in terms of the dynamics of the solutions.
152
5 Signal transmission delays
Theorem 5.7.5. The boundary Β between the basins of attraction of the two locally stable equilibria r\ and r3 of (5.7.1) is constituted by the initial value φ such that the solution Zt (Φ) either converges to r2 or is asymptotically periodic. A sketch of the above theorem is presented in Pakdaman, Grotta-Ragazzo, Malta, Arino and Vibert [1998b], see also the discussions in the next section. The above results show that the space X is partitioned into three disjoint subsets that are positively invariant by the semiflow generated by (5.7.1): the two basins of attraction of the stable equilibrium points and the boundary separating them. In terms of the temporal evolution of solutions of (5.7.1), this means that there are only three types of asymptotical behaviors: solutions tend to r\ or or oscillate around r^. Solutions oscillating indefinitely around r j are not likely to be observed as they are unstable. Nevertheless, the presence of these oscillating solutions, as will be shown in later sections, is important for describing the global attractor and for describing transient oscillations which can be easily obtained in numerical investigations. Theorem 5.7.1 provides a method to locate numerically some points on the boundary for a given initial condition. A given initial condition can be shifted "up" or "down", along a "positive" direction, until it crosses the boundary (Figure 5.7.4). However, φ+ bu(
Ψ
Figure 5.7.4. Schematic representation of the map bu in a plane, where r\, and r3 are the equilibria and u is a strictly positive element in X. The decreasing curve is the basin boundary. All points below the boundary are in the basin of attraction of r\, whereas all points above the boundary are in the basin of attraction of r$. A given initial point has to be translated in the same direction as u to reach the boundary if it is in the basin of attraction of r\, and in the opposite direction if it is in the basin of attraction of r3. The amount by which the initial point has to be translated to reach the boundary gives the value of bu.
carrying such a task requires extensive computations because solutions close to the boundary tend to have oscillatory transients (Babcock and Westervelt [1987]) whose duration increases exponentially with the delay (Pakdaman, Grotta-Ragazzo, Malta and Vibert [1997c]), as shown in the last section.
153
5.7 Effect of delay on the basin of attraction
In what follows, we present some discussions about comparison of basin boundaries. One method to compare the boundaries separating the basins of attraction of the equilibria r\ and 7-3 of (5.7.1) for different values of τ\ and T2 is to consider a restricted set of initial conditions such that there exists a one-to-one correspondence between initial conditions for different values of τι and Τ2· One method for doing this is to consider constant functions as initial conditions. In the following, we will restrict our attention to this class. For a given real number c\, let β (ci) be the unique real number such that (c\, ß(c\ )) is on the basin boundary. From the characterization of the basin boundary described above, we know that the function β is a decreasing continuous function defined on the real line. The graph of β divides the (ci, C2)-plane into two regions. Points "below" this graph correspond exactly to the constant initial conditions lying in the basin of attraction of r\, and those "above" the graph to the ones belonging to the basin of attraction of r3. Thus Bc, the graph of β in the (c\, C2)-plane, can be considered as the boundary separating the basins of attraction of the two stable equilibrium points. To illustrate the effect of the delays on Bc, we consider identical neurons connected with either symmetrical weights or symmetrical delays. In both cases, we set I\ = — ^ and I2 = ψ . For these parameters, the coordinates of the equilibria r\, V2 and η are Π
=
(-a,
-b),
r2
= ( x \ u , xi«) = (0, 0),
r3 =
(a,
b),
where (a, b) is the strictly positive solution of (5.7.3). Note that in this case, the system is invariant under the change (jq, X2) — ( ~ x \ , —xi). This implies that Bc is also symmetric under the same transformation. Symmetrical weights. For identical neurons, receiving the same inputs and connected through symmetrical weights and delays, the neurons are indistinguishable from each other. Therefore, the basin boundary Bc is the straight line defined by c\ -
x\u +C2-
X2u =
0.
Hence, in this situation, varying both delays together does not bring any change to the basin boundary. However, when the delays are no longer equal, Bc undergoes changes depending on the values of the delays. Numerical simulations in Pakdaman, Grotta-Ragazzo, Malta, Arino and Vibert [1998b] show that when T2 decreases from τι to 0, the slope of the boundary at each point increases from — 1 to a strictly negative limit value which depends on ri. Symmetrical delays. For neurons connected through symmetrical delays but nonsymmetrical weights, changes in the delays may modify the graph Bc. Again, numerical simulations in Pakdaman, Grotta-Ragazzo, Malta, Arino and Vibert [1998b] show that the boundary Bc changes for nonsymmetric weights, when the delay is changed. In particular, the slope of the boundary decreases as the delays ri = %2 are increased from 0 to 2, except at the unstable point V2.
154
5 Signal transmission delays
The two-neuron network studied here is representative for irreducible cooperative networks, since the theoretical description of the basin boundary, and more generally, the global picture of the phase portrait derived for the two-neuron network can be extended to irreducible cooperative networks of arbitrary size (Pakdaman, Malta, Grotta-Ragazzo and Vibert [1996]).
5.8
Synchronized activities
Assuming the network considered in (5.7.1) consists of two identical neurons without external inputs, we obtain the following: *l(0
-
-μχι(ί)
+ f(X2(t
-
τ)),
X2(0
=
-μΧ2(0
+ f(xi(t
-
τ)).
(5.8.1)
In what follows, we will assume that / : R —»• R is C2-smooth with / ( 0 ) = 0, l i m ^ ± 0 0 f{x) = / ± 0 0 e R , / ' ( 0 ) > μ and / ' ( * ) > Oforall* € R, a n d / " ( * ) * < 0 for Λ: Φ 0. It is easy to see that (5.8.1) has three equilibria ( ξ _ , £_), ( 0 , 0 ) and (£+, £+) with ξ~ and ξ+ denoting the negative and positive zeros of (5.8.2)
μξ = / ( ξ ) ,
and ξ± denoting the constant mapping on [—1,0] with the constant value also that / ' ( f ± ) < μ. See Figure 5.8.1.
/(*)
ξ
Figure 5.8.1. The graphs of y = / ( ξ ) and y = ξ intersect at (ξ~,ξ~), and ( $ + , £ + ) with /'(ξ±)
< μ.
(0,0)
Note
5.8 Synchronized activities
155
Then results in Section 5.7 shows that almost every trajectory is convergent to (ξ-, £_) or (£+, More precisely, there exists a closed subspace Η of codimension 1 in the phase space C([—r, 0]; R 2 ) and a homeomorphism g : Η -> g(H) = Β c C([—r, 0]; R 2 ) such that the graph Β = g(H) is exactly the boundary separating the basins of attraction of (£_, £_) and (£+, £+). Moreover, this boundary consists of the initial value (φ\,φι) e C([—r, 0]; R 2 ) whose trajectories converge either to (0,0) or to a periodic orbit. In this section, we describe the detailed dynamics of the semiflow of (5.8.1) restricted to the boundary Β and depict the geometric and topological structures of Β. Note also that the above discussions show that, due to the nature of identical decay rate μ and synaptic delay r of the two neurons, almost every trajectory is eventually synchronized (convergent to the synchronous equilibria (£_, £_) or (£+, £+)X here a solution (JCI, X2) of (5.8.1) is synchronous if χχ = χι on the common domains of definition. Clearly, if we let ζ denote the common value of xi = X2, then a synchronous solution is completely characterized by the scalar delay differential equation z(t) = -μζ(ί)
+ f(z(t - τ)).
(5.8.3)
If we further normalize the delay τ by x(t) = z(rt) then we get an equivalent system x(t) = -τμχ(ί)
+ zf(x(t
- 1)).
Renaming the parameter and the signal function, we arrive at x(0 = -μχ(0
+ f(x(t - 1))
(5.8.4)
with parameter μ > 0 and a C2-smooth bounded nonlinearity / : Μ —»· Μ satisfying all conditions stated in the above discussion. We now summarize the results from the monograph of Krisztin, Walther and Wu [1999] and explain the major steps towards these results. Every element φ e C([—1,0]; M) determines a solution χφ : [—1, oo) —>· R of equation (5.8.4). The relations F(t,
define a continuous semiflow F : R + χ C —> C. Using the associated variation-ofconstants formula jc(f) = β-μ'χ(0)
+ f e-^-s)f(x(s Jo
- 1)) ds,
one can easily show that all maps F(t, •) with t > 0 are injective and continuously differentiable and F is monotone with respect to the pointwise ordering < on C. The positively invariant set S = {φ eC·, (χφ)~Ηθ) is unbounded}
156
5 Signal transmission delays
separates the domain of absorption into the interior of the positively invariant cone Κ = {φ e C ; 0 ( j ) > 0 for all 5 € [ - 1 , 0 ] } from the domain of absorption into the interior of —AT. Such a separatrix has the following nonordering property: if φ, ψ e C and φ < ψ then either φ e C \ S or ψ e C \ S. Or equivalently, S does not contain two distinct elements which are ordered. In order to derive a graph representation of 5, we linearize equation (5.8.4) at the equilibrium 0. The derivatives {D2F(t, 0)}f>o form a strongly continuous semigroup and satisfy D2F{t,m
=
yt,
with the solution y^ : [—1, oo) —• R of the linearized equation y{t) = - ß y ( t ) + f'(0)y(t
- 1)
(5.8.5)
given by — <j>. As discussed in Section 5.4, the spectrum σ of the generator of the semigroup consists of simple eigenvalues which coincide with the zeros of the characteristic function ϋΒλ\->λ
+ μ — /'(0)e~k
G C.
There is one real eigenvalue λο· The other eigenvalues form a sequence of complex conjugate pairs (kj, λ7 ) with Reλ 7 + ι < Rely < λο
and
(2j — \)π < ImXj < 2 j n
for all integers j > 1, and Re kj — oo as j —• oo. The number of eigenvalues in the open right halfplane depends on μ and /'(Ο). Any odd integer 2j + 1, j € N, can be achieved. Let P, L, and Q denote the realified generalized eigenspaces of the generator associated with the spectral sets {λο}, {λχ, λι}, and σ \ {λο, λι, λι}, respectively. Then C = β φ Ζ , φ P. It is easy to see that Ρ = Μχ 0 with χ ο (0) = ek°9 for θ e [ - 1 , 0], We assume that /'(Ο) > μ/cos6ß,
where θμ e (t",27t) and θμ = -μίαηθμ.
(5.8.6)
As shown in Section 5.4, this is equivalent to Ο^βλι.
(5.8.7)
Then the unstable space of the generator of the semigroup {D2F(t, 0)},>o is at least 3-dimensional. In Krisztin, Walther and Wu [1999], we proved the following graph representation theorem.
5.8 Synchronized activities
157
Theorem 5.8.1. There exists a map Sep : Q φ L ->· Ρ such that S = {χ + Sep(x); χ € Q φ L) and || SepOc) - Sep(x)|| < β λ °||χ - χ||
/or χ, χ e β φ L .
For each φ € C, let φρ, φ®, φL denote the projections of φ onto P, Q, L, respectively. If φ e C \ S, then φρ Φ Sep(
as t —> oo, and if φ is below S
A local approximation of S near zero is possible by the linearization (5.8.5). To be more precise, we recall that there exists a 3-dimensional C' -submanifold Wioc of C which is tangent at 0 to the realified eigenspace L Θ Ρ of the spectral set {λο, λι, λ ι} and has the property that for every φ e Wioc there exist a solution x : R —> R and i e l such that xq = Φ and xs e W\oc for all s < Τ. The forward extension W = F(R+ χ Wioc) of Wioc is an invariant subset of C. In the following, for a given Banach space E, we use E e to denote the open ball in Ε centered at the zero and of radius ε > 0. Then we have the following local approximation theorem for Wioc Π S. Theorem 5.8.3. There exist η > 0 and ε > 0 and a Lipschitz continuous map Sep^ : Q Φ Ρ such that Sep„(0) = 0, (Wioc Π S ) η
S e p ^ L ^ c Q@P E , ( 0
+
Lr,
+
Ρ
ε
) =
{ χ +
Sep „(χ); χ
€
L„}.
We refer to Figure 5.8.2 for illustration. Recall that W c C is invariant. Later we will see there exists a close relation between W and the global attractor of (5.8.4), and thus it is important to describe the structure of W. The following is the main result in Krisztin, Walther and Wu [1999]. Theorem 5.8.4. W is invariant, and the semiflow defines a continuous flow Fw '• R χ W —> W. For W and for the part W H S of the separatrix S in W, we have the following graph representations (see Figure 5.8.3 for illustration):
158
5 Signal transmission delays Q
Ε
Gι
(codim 0 = 3)
(codim Ε = 3)
(dimGj = 1)
Figure 5.8.3. Graph representations of W and W f l 5 .
5 . 8 Synchronized activities
159
There exist subspaces G2 C G3 of C of dimensions 2 and 3, respectively, a complementary space Gi 0/G2 in G3, a closed complementary space Ε of Gz in C, a compact set Ds C G2 and a compact set Dw C G3, and continuous mappings w : Dw —> Ε and ws : Ds G\® Ε such that9 W = {χ + w(x); χ e Dw},
W Π 5 = {χ + ws(x)·, χ e Ds}.
We have Dw = dDwU Dw,
W = {X + w(x); χ
eDw},
ο
and the restriction of w to Dw is C1 -smooth. The restriction of Fw to R χ W is C1 -smooth. The domain Ds is homeomorphic to the closed unit disk in ]R2, and consists of the trace of a simple closed C 1 -curve and its interior. The map ws is C1 -smooth in the sense that ws | ° is Ds Cl-smooth, and for each χ Ε d Ds there is an open neighborhood Ν of χ in G2 so that ws \nhDs extends to a C1-map on N. C o n c e r n i n g t h e d y n a m i c s , w e h a v e that
the set 0
= wns\(wns)
=
{x +
«;s(x);/
e
dDs}
is a periodic orbit, and there is no other periodic orbit in W. The open annulus (W il S) \ {0} consists of heteroclinic connections from 0 to Θ. (Namely, F(t, φ) 0 as t -> —00 and F(t, φ) ^ Θ as t ^ 00 for every φ in the open annulus). For every φ e W, Fw(t, φ) 0 as t —> —00. W is compact and contains the equilibria (£_, ξ-) and (£+, §+) in C. For every φ e W. (ξ-,ξ-)<φ<(ξ+,ξ+). There exist homeomorphisms from W and Dw onto the closed unit ball in R3, which send bdW = W \
W =
{ x +
w(x)\
χ e
dDw}
and
8DW
onto the unit sphere S2 C M3. If we define χ_ and χ+ by (ξ-, ξ~) = χ _ + ι υ ( χ - ) a n d ( ξ + , £ + ) = χ + + νυ(χ+), respectively, then the set dDw \ { χ - , χ+} is a 2-dimensional C 1 -submanifold of Gi, and the restriction of w to Dw \ {χ~, X+} is C1 -smooth (in the sense explained Q
0
'For a given set Μ of a topological space, we use M, W and 3Μ to denote the interior, closure and boundary of M, respectively.
160
5 Signal transmission delays
above for ws). The points φ e W \ (W Π S) above the separatrix S form a connected set and satisfy Fw(t, φ) —>• (ξ+, £+) as t —> oo, and all φ G W \ (W Π S) below the separatrix S form a connected set and satisfy Fw(t, φ) —> (£_, £_) as t —> oo. Finally, for every φ in the set bd W and different from £_) and £+)> Fw(t, φ) Θ as t —*• — oo. Combining all the results stated so far, one may visualize the invariant set W as a smooth solid spindle which is split by an invariant disk into the basins of attraction towards the tips (ξ-, ξ-) and (£+, £+), see Figure 5.8.4.
Figure 5.8.4. A smooth solid spindle in the attractor of a scalar delay positive feedback equation.
The first steps toward these results exploit the monotonicity of the semiflow. Also important is the observation that the set W is contained in the order interval between the stationary points (£_, ) and (£+ and that there are heteroclinic connections from 0 to (£_, ξ - ) and (£+, £+)> given by monotone solutions χ : R —>• R without zeros. An important tool for the investigation of finer structures is a version of the discrete Liapunov functional V : C \ {0} ->· Ν counting sign changes, which was introduced by Mallet-Paret [1988]. Related are α-priori estimates for the growth and decay of solutions with segments in (sub-)level sets of V, which go back to Mallet-Paret [1988] and in a special case to Walther [1977,1979]. These tools are first used to characterize the invariant sets W \ {0} and W Π 5 \ {0} as the sets of segments xt of solutions χ :R-+R with α-limit set {0} which satisfy V(xt) < 2 for all ί e R and V(xt)_ = 2 for all f e 1 , respectively. Moreover, nontrivial differences of segments in W and
5.8 Synchronized activities
161
W H S belong to V _ 1 ({0, 2}) and V _ 1 (2), respectively. The last facts permit to introduce global coordinates on W and W Π S. It is not difficult to show that the continuous linear evaluation map Π2 : C 9 0 ^
(0(0), 0 ( - l ) ) r G R 2
is injective on W Π S, and the continuous linear evaluation map Π3 : C —• R 3 given by n 3 0 = (0(O),0(-l),cp(0))r and (Pr/>0)(O =
-
^c P (d>)e k o t ,
where Prp is the spectral projection onto Ρ along Q φ L, is injective on W. The inverse maps of the restrictions of Π2 and Π3 to W Π S and W, respectively, turn out to be locally Lipschitz continuous. The next step leads to the desired graph representation. We embed R 3 d Π3 W and 2 R D Π 2 (W Π S) in a simple way as subspaces G 3 D G2 into C, so that representations by maps w and ws with domains in G3 and G2 and ranges in complements Ε of G3 in C and Ε Θ G\ of Gi in C, Gi c G3, become obvious. It is not hard to deduce that W is given by the restriction of w to an open set, and that this restriction is C 1 -smooth. On W, the semiflow extends to a flow Fw : 1 x F W, and Fw is C 1 -smooth on the C 1 -manifold R χ W. Phase plane techniques apply to the coordinate curves in R 2 (or G2) of the flowlines t (-> Fw(t, 0) in the invariant set W H S , and yield the periodic orbit 0 = (Wns)\(wn5) as well as the identification Π 2 (W Π S) = int(n 2 <9), which implies that also W Π S is given by the restriction of u>s to an open subset of G2.
The investigation of the smoothness of the part W Π 5 of the separatrix 5 in W and of the manifold boundary bd W = W\W starts with a study of the stability of the periodic orbit Θ. We use the fact that there is a heteroclinic flowline in W Π S from the stationary point 0 e C to the orbit Θ, i.e., in the level set V - 1 (2), in order to show that precisely one Floquet multiplier lies outside the unit circle. It also follows that the center space of the linearized period map, or monodromy operator Μ = D2F(CO,
po),
162
5 Signal transmission delays
given by po g Θ and the minimal period ω > 0, is at most 2-dimensional. The study of the linearized stability is closely related to earlier work in Lani-Wayda and Walther [1995] and to α-priori results on Floquet multipliers and eigenspaces for general monotone cyclic feedback systems with delay due to Mallet-Paret and Sell [1996a], To show that the graph WHS C V~l(2) U {0} is C 1 -smooth. We identify pieces of W Π S in a hyperplane Y transversal to Θ, as open sets in the smooth transversal intersection of W with the center-stable manifold of the Poincare return map which is associated with Y and a point po g Θ. Then we use the C'-flow Fw to obtain the smoothness of the set W Π S \ {0}. (Smoothness close to 0 and the relation T0(W nS) = L follow by other arguments.) Of course, this approach relies on a result on the existence of C 1 -smooth center-stable manifolds at fixed points of C J -maps in Banach spaces. A technical aspect of the smoothness proof for W Π 5 is that close to a fixed point of a C 1 -map with one-dimensional linear center space we have to construct open and positively invariant subsets of the center-stable manifold Wcs provided a single trajectory in Wcs but not in the strong stable manifold converges to the fixed point. Having established the smoothness of W Π S we show in a series of propositions that the manifold boundary bd W = W \ W (without the stationary points (ξ-, ξ_) and (ξ + , £+)) coincides with the forward extensions of a local unstable manifold of the period map F(a>, ·) at a fixed point po e Θ. The long proof of this fact involves the charts Π2 and Π3, and uses most of the results obtained before. The next step achieves the smoothness of a piece of W in a hyperplane Η of C transversal to the periodic orbit Θ. We construct a C 1 -smooth graph over an open set in a plane X12 C Η which extends such a piece of WC\ Η close to a point po e Θ Π Η beyond the boundary. Using the flow Fw we then derive that W and W Π S are C 1 -smooth up to their manifold boundaries, in the sense stated before. The final steps lead to the topological description of W. First we use the characterization of bd W mentioned above to define homeomorphisms from bd W onto the unit sphere S2 C M3. Then a generalization due to Bing [1983] of the Schoenfliess theorem from planar topology is employed to obtain homeomorphisms from W onto the closed unit ball in R 3 . The application of Bing's theorem requires to identify the bounded component of the complement of the set n 3 ( b d W) = S2 in R 3 as the set Π3 W, and to verify that Π3 W is uniformly locally 1-connected. This means that for every ε > 0 there exists S > 0 so that every closed curve in a subset of Π3 W with diameter less than δ can be continuously deformed to a point in a subset of Π3 W with diameter less than ε. We point that the proof of this topological property relies on the smoothness of the set
163
5.8 Synchronized activities
and involves subsets of boundaries of neighborhoods of 0, (£_, £_), (£+, ξ+) in W which are transversal to the flow Fw • In order to construct these smooth boundaries we have to go back to the variation-of-constants formula for retarded functional differential equations in the framework of sun-dual and sun-star dual semigroups (Diekmann, van Gils, Verduyn Lunel and Walther [1995]). We note that for some general sigmoid functions (including / ( £ ) = tanh(a£) with some a > 0) under the condition equivalent to Re λ 2 < 0, Krisztin and Walther [ 1999] proved that W is indeed the global attractor of the semiflow generated by (5.8.4). Note also that the solid spindle W loses its smoothness at the two tips ( ξ - , £_) and (ξ+, ξ+), the singularity at these two points have been completely characterized by Walther [1998], As a final remark, we note that system (5.8.3) can be rewritten as ε ώ ( ΐ ) = - ß w ( t ) + f(w(t
- 1))
with ε = τ - 1 . Therefore, the case of large delay corresponds to the situation where ε —>• 0. This motivates the study of Chen [1999] on the limiting profiles of the periodic solutions for the equation e i ( i ) = —x(t) + f(x(t
- 1), λ)
(5.8.8)
with f € C3(R χ M; R), / ( 0 , λ) = 0 and f{x, λ) = (1 + λ)χ + αχ2 + bx3 + o(x3) as χ
Ο.10
It was shown that in case where a = 0, there is a neighborhood U of (0,0) in the (λ, ε) plane and a sectorial region S in U such that if (λ, ε) e U, then there is a unique periodic solution of (5.8.8) with period 1 + ε + 0 ( | ε | ( | λ | + |ε|)) as (λ, ε) —> (0, 0) if and only if (λ, ε) € S. Moreover, if b < 0 then for small and fixed λ = λο > 0, the set {ε; (λο, ε) 6 5} is an interval (0, ε(λο)). At the point (λο, ε(λο)), there is a Hopf bifurcation of sine waves and the periodic solution approaches a square wave as ε —> 0. That is, the periodic solution χ χ 0 , ε ( 0 c u 0 (respectively, C2x0) as ε —> 0 uniformly on compact subsets of (0, (respectively, l)), where c u 0 and C2x0 are the two distinct nontrivial fixed points of / ( · , λο). If b > 0 then for small and fixed λ = λο > 0, the set (ε; (λο, ε) e S} is an interval (ε(λο), β(λο))· At the point (λο, ε (λο)), there is a Hopf bifurcation. For small and fixed λ = λο < 0, the set {ε; (λο, ε) e S} is an interval (0, α(λο)). As ε 0, the unique solution becomes pulse-like in the following sense: the periodic solution has the property that χχ0ι£(ί) —• 0 as ε —• 0 uniformly on compact subsets of (0, U l). The magnitude of the pulse exceeds max{|cu 0 |, k2x0l}· See Figures 5.8.5 and 5.8.6. 10
The case where
f(x, λ) = - ( 1 + λ)χ + αχ2 + bx3 -I- o(x3) as x ->• 0 was discussed by Chow, Hale and Huang [1992] and Hale and Huang [1994],
164
5 Signal transmission delays
Figure 5.8.5. Limiting profiles of periodic solutions when a = 0 and b < 0: square waves.
Figure 5.8.6. Limiting profiles of periodic solutions when a = 0 and b > 0: pulses. In the case where αφ 0, there exist a neighborhood U of ( 0 , 0 ) and a sectorial region S in U such that if (λ, ε) e U, then there is a unique periodic orbit, determined by the solution * λ > 6 of (5.8.8) with period 1 + ε + 0(|ε|(|λ| + |ε|)) as (λ, ε) (0,0) if and only if (λ, ε) e S. Moreover, for small and fixed λ = λο > 0, the set {ε; (λο, ε) € S} is an interval (ε(λο), β(λο)). At the point (λο, ε(λο)), there is a Hopf bifurcation. For small and fixed λ = λο < 0, the set {ε; (λο, ε) € 5} is an interval (0, α(λο)). As ε —>· 0, the unique periodic solution becomes pulse-like with the magnitude of the pulse exceeding |coa0I> where cox0 is the nontrivial fixed point of / ( · , λο) in a small neighborhood of 0. See Figure 5.8.7 for the limiting profiles in
165
5.9 Desynchronization, phase-locked oscillation and connecting orbits
such a case.
ε
χ ε(λ)
. ΛΠΜΙ - λ
Figure 5.8.7. Limiting profiles for the case where α φ 0.
5.9 Desynchronization, phase-locked oscillation and connecting orbits We continue our discussion about the following system of delay differential equations J ii(0 = - μ κ ( ί ) + f(v(t - τ)), [ v(t) = -μν(ί)
+ f(u(t
- τ)),
or equivalently (after rescaling the time variable) ü{t) = - μ τ Μ ( 0 + τ/(υ(ί - 1)), v{t) = -μτν(ί)
+ rf(u(t
- 1))
(5.9.1)
for a network of two identical neurons, where τ and μ are given positive constants and / : R Ε is a C'-map satisfying the following set of conditions: / ( 0 ) = 0, / ' ( ξ ) > 0 for all £ € R (Monotone Positive Feedback); /(§) = has exactly three zeros ξ ~ < 0 < £ + andmax{/'(£~), /'(£+)} < μ < /'(0) (Dissipativeness and Instability of 0); / ( § ) = - / ( - £ ) for all $ € R (Symmetry); The function (0, oo) 3 ξ (Concavity).
€ l i s monotonically decreasing
It should be mentioned that some results described in this section (obtained in Chen, Krisztin and Wu [1999]) do not require the symmetry and concavity conditions on / , and that the function / ( $ ) = arctan(y£), ξ e R
166
5 Signal transmission delays
or /(ξ)
= —
ζ-,
ξεΚ
satisfies the above set of conditions with γ > μ. Our ultimate goal is to describe the global attractor of system (5.9.1), and our focus here is on the existence of connecting orbits from a synchronous periodic solution to a phase-locked periodic solution, here a phase-locked 71-periodic solution of (5.9.1) is a solution satisfying u(t) = v(t — - j ) for all ί e R. The existence of such connecting orbits suggests a mechanism for desynchronization of the network. As shown in the last section, a synchronous solution of (5.9.1) is completely governed by the scalar delay differential equation for w(= u = v): + r f ( w ( t - 1)).
w(t) = -μτιν(ί)
(5.9.2)
The zeros of the characteristic equation λ + μτ-
τ/'(0)e~k = 0
(5.9.3)
form the spectrum of the generator As of the solution semiflow of the linear equation Z(f) = - μ τ Ζ ( ί ) + τ/'(0)Ζ(ί - 1).
(5.9.4)
All zeros of (5.9.3) are simple eigenvalues of As. There is one real eigenvalue, the other eigenvalues form complex conjugate pairs with real parts less than the value of the real eigenvalue. Defining 2π — arccos y j j ^
Απ — arccos y^gj
f
V [ / ' ( 0 ) ] 2 - μ2 '
Τί
~
VU'm2
- μ1 '
one finds that for τ > xs the realified generalized eigenspace of As associated with the spectral set of the eigenvalues with positive real part is at least 3-dimensional, while for rs < τ < x's it is exactly 3-dimensional. When τ > ts, there exists a A
3-dimensional local unstable manifold W ^ j o c of the solution semiflow generated by equation (5.9.2), which is, at zero, tangent to the 3-dimensional realified generalized eigenspace of the generator As associated with the positive real eigenvalue and the pair of complex conjugate eigenvalues with the greatest real part. We know that W3tS is a 3-dimensional submanifold of C which contains a smooth invariant disk bordered by a periodic orbit. The disk separates W34i into halves: one half contains connecting orbits from the stationary point 0 or the periodic orbit to a positive stationary point, the other half contains connecting orbits from the stationary point 0 or the periodic orbit to a negative stationary point. Moreover, if τ e (τΛ, τ5') then W ^ j is indeed the global attractor of the solution semiflow of equation (5.9.2).
167
5.9 Desynchronization, phase-locked oscillation and connecting orbits
It is interesting to note that the characteristic equation of the linearization at 0 of the full system (5.9.1), namely, ί X(t) = -μτΧ(ί) I Y(t) = -μτΥ{t)
+ Tf'(0)Y(t + zf'(0)X(t
- 1), - 1)
1
""
;
can be decoupled as [λ + μτ - τ/'(0)- λ ][λ + μτ + τf'(0)e~k]
= 0,
(5.9.6)
where the first factor corresponds to the characteristic equation (5.9.3). Defining td =
it — arccos j f j ^ Arm
1
-!*
2
'
one sees that for τ > the zeros of the equation λ + μτ + τ f'(0)e~k = 0 are simple and occur in complex conjugate pairs, at least one pair of complex conjugate zeros has positive real part, while for 0 < τ < r^ all zeros have nonpositive real parts. In Chen and Wu [1998], it was proved that when r > r j , there exists a complete analogue W o f W3,i for the full system (5.9.1). In particular, this is separated by a smooth invariant disk borded by a phase-locked periodic orbit OiConsequently, when τ > τ^(> %d), the unstable space of the generator of the Co-semigroup generated by (5.9.5) is at least 5-dimensional and there exists a 5dimensional local unstable manifold \V5j0c tangent, at zero, to the realified generalized eigenspace of the generator associated with the positive real and the two leading pairs of complex conjugate eigenvalues with positive real parts. The global forward extension Λ
Λ
Λ
Λ
Λ
Λ
of W5,ioc contains Ψτ,,ά and W-i<s = {(ψ, φ)\ φ € Wß.j}, and in particular, W5 A
A
contains a phase-locked periodic orbit O2 and a synchronous periodic orbit O4. Chen, Krisztin and Wu [1999] described completely the global dynamics of the semiflow of system (5.9.1) restricted to W5 and established the existence of heteroclinic orbits from the synchronous periodic orbit O4 to the phase-locked periodic orbit 02· The purpose of this section is to briefly describe the main results, technical tools and approach of the above work. First of all, we make the following change of variables
I
x (t) = u (2i), y(t) = v(2t - 1)
to get an equivalent cyclic system of delay differential equations ί x(t) = -2μτχ(ί) + 2τ/(γ(ΐ)), I y(t) = —2μτγ(ί) + 2rf(x{t -
f4- Q
7)
1)).
Powerful technical tools and general results have been developed in the recent work of Mallet-Paret and Sell [1996a, b].
168
5 Signal transmission delays
In what follows, we are going to describe the results only for the transformed system (5.9.7), with Οa, 02, W5 and denoting the analogues of O4, Ö2, W^^, W5 and W3;ii, respectively. The fundamental technical tool for (5.9.7) is an integer-valued Liapunov functional V : C(K) \ {0} -> {0, 2 , . . . , 00}, where Κ = [ - 1 , 0] U {1}, C(K) = {φ : Κ ->• Ε; φ is continuous } is the phase space of system (5.9.7) and V(
W5 \ {0} =
φ e C(K) \ {0};
there exists a solution ζφ : Μ Ε2 v of (5.9.7) with z Q = φ, V ( z f ) < 4 for all t e Ε and zf —> 0 as t ——00
Also (T2)
V(
if φ, f € W5 and φ φ ψ.
Another important observation we make is that every non-constant periodic solution of (5.9.1) is either synchronous or phase locked. This, together with the concavity and symmetry conditions, enables us to apply the results of Krisztin and Walther [1999] about the uniqueness of periodic solutions in each level set of V to show that (
there exists at most one periodic orbit in each level set V~l(2k), k = 1 , 2 , . . . , of V.
'
One can also apply this Liapunov functional to show that (T5)
there exists no homoclinic connections of O2 and O4.
The most important application of V is perhaps the proof of the Poincare-Bendixson Theorem for cyclic systems (Mallet-Paret and Sell [1996b]) which, together with the existence of a heteroclinic connection from 0 to non-trivial equilibria of (5.9.7) (Hirsch [1988] or Smith [1987]), enables us to completely describe the dynamics of the semiflow on W5. To be more precise, we introduce the separatrix S — {φ € C(K); φ = 0 or V ( z f ) > 0 for all t > 0}. Then we show that (T6) if φ e (W5 r\S)\
{0}, then α (φ) = {0}, ω(φ) = 02 or O4,
if φ G W5 \ S, then α (φ) = {0}, ω (φ) = {χ+} or {χ_}, if φ G bd ήφζ
\ (S U {χ-, χ+}), then α (φ) = 0 4 or 02, ω{φ) = {χ+} or {χ_},
(bd W5 (Ί S) \ { 0 2 U O4}, then α (φ) = 0 4 and ω (φ) = 02,
5.9 Desynchronization, phase-locked oscillation and connecting orbits
where bd W5 =
169
W5\W5.
To show that there is indeed a connecting orbit from Ο4 to O2, we need further information about the Floquet multipliers of O4. We apply the general theory of Mallet-Paret and Sell [1996a] and some type of homotopy argument to show that
(T7)
there exist exactly 3 Roquet multipliers of O4, counting multiplicities, outside the unit circle, and the associated realified generalized eigenspace is contained in V - 1 ({0, 2}) U {0}.
An immediate consequence of (T7) implies that there exists a 3-dimensional C 1 smooth local unstable manifold W ^ i P ) of a Poincare-map Ρ associated with a certain hyperplane. This manifold contains at least one point of a connecting orbit from O4 to χ+ and at least one point of a connecting orbit from O4 to χ _ . Since W^iP) is 3dimensional, there exists a continuous curve in W ^ i P ) \ Ο4 connecting these points, and a continuity argument shows that the curve contains a point φ e W^iP) Π S, which gives that (T8)
there exists a connecting orbit from Ο4 to O2·
The above results can be depicted in Figure 5.9.1.
Figure 5.9.1. Two solid spindles and their connecting orbits in a system of two identical neurons. The left spindle is separated by a disk bordered by a phaselocked oscillation, and the right spindle is separated by a synchronous periodic orbit. There exist connecting orbits from the synchronized to the phase-locked orbit.
5 Signal transmission delays
170
There are several questions remaining to be answered. Denoting by W u (04) the forward extension of we conjecture that (bd
η S) \ {02 υ o4} = (wu(04) η S) \ {o2 υ oA} = {φ I α(φ) = and ω(φ) = O2}.
Also, note that ψ e (W5 Π S) \ {0}
ω(φ) = 02 or 0 4 .
It is thus important to describe Hi = {
i = 2,4
(the basins of attraction of 0,· in W5). Finally, we note that the next critical value of 3π—arccos x when (5.9.6) has a pair of purely imaginary zeros is t2 j = τ^). It is then reasonable to expect that W5 is the global attractor of (5.9.1) if r e (Ts, Limiting profiles of the synchronous periodic orbit and the phase-locked periodic solution have been described in Chen and Wu [1999c], pulses and square waves are both observed.
Bibliography
D. Η. Ackley, G. H. Hinton and T. J. Sejnowski [1985], A learning algorithm for Boltzman machines. Cognitive Science 9, 147-176. U. an der Heiden [1980], Analysis of Neural Networks. Lecture Notes in Biomath. 35, Springer-Verlag, Berlin. P. Andersen, G. N. Gross, T. Lomo and 0. Sveen [1969], Participation of inhibitory and excitatory interneurons in the control of hippocampus cortical output. In: The Interneuron (Μ. Brazier, ed.). University of California Press, Los Angeles, 415-465 J. A. Anderson, J. W. Silverstein, S. A. Ritz and R. S. Jones [1977], Distinctive features, categorical perception, and probability learning: some applications of a neuron model. Psychol. Rev. 84, 413-451. K. L. Babcock and R. M. Westevelt [1987], Dynamics of simple electronic neural networks. Phys. D 28, 305-316. P. Baldi and A. F. Attyia [1994], How delays effect neural dynamics and learning. IEEE Trans. Neural Networks 5, 612-621. J. Belair [1993], Stability in a model of a delayed neural network. J. Dynam. Differential Equations 5, 607-623. J. Belair, S. A. Campbell and P. van den Driessche [1996], Frustration, stability and delayinduced oscillations in a neural network model. SIAM J. Appl. Math. 46, 245-255. J. Belair and S. Dufour [1996], Stability in a three dimensional system of delay-differential equations. Canad. Appl. Math. Quart. 4, 137-154. A. Berman and R. J. Plemmons [1979], Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York. R. H. Bing [1983], The Geometric Topology of3-Manifolds. Amer. Math. Soc. Colloq. Publ. 40. Providence, RI. L. P. Burton and W. M. Whyburn [1952], Minimax solutions of ordinary differential systems. Proc. Amer. Math. Soc. 3, 794-803. S. Campbell and J. Belair [1995], Analytical and symbolically-assisted investigation of Hopf bifurcations in delay-differential equations. Canad. Appl. Math. Quart. 3, 137-154. J. Cao and D. Zhou [1998], Stability analysis of delayed cellular neural networks. Neural Networks 11, 1601-1605. G. A. Carpenter [1994], A distributed outstar network for spatial pattern learning. Neural Networks 7, 159-168. Y. Chen [1999], Limiting profiles for periodic solutions of scalar delay differential equations. To appear in Proc. Roy. Soc. Edinburgh.
172
Bibliography
Y. Chen, Τ. Krisztin and J. Wu [1999], Connecting orbits from synchronous periodic solutions to phase-locked periodic solutions in a delay differential system. J. Differential Equations 163,130-173. Y. Chen and J. Wu [1998], Existence and attraction of a phase-locked oscillation in a delayed network of two neurons. To appear in Adv. Differential Equations. Y. Chen and J. Wu [1999a], Minimal instability and unstable set of a phase-locked periodic orbit in a delayed neural network. Phys. D 134, 185-199. Y. Chen and J. Wu [1999b], On a unstable set of a phase-locked periodic orbit of a network of two neurons. To appear in J. Math. Anal. Appl. Y. Chen and J. Wu [1999c], The asymptotic shapes of perioic solutions of a singular delay differential system. To appear in J. Differential Equations. S. N. Chow, J. K. Hale and W. Huang [1992], From sine waves to square waves in delay equations. Proc. Roy. Soc. Edinburgh 120A, 223-229. L. O. ChuaandL. Yang [1988a], Cellular neural networks: theory. IEEE Trans. Circuits Systems 135, 1257-1272. L. O. Chua and L. Yang [1988], Cellular neural networks: applications. IEEE Trans. Circuits Systems 135, 1273-1290. P. S. ChurchlandandT. J. Sejnowski [1992], The Computational Brain. MIT Press, Cambridge. M. A. Cohen [1990], The stability of sustained oscillation in symmetric coopertive-competitive networks. Neural Networks 3, 609-612. M. A. Cohen and S. Grossberg [1983], Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Systems Man Cybernet. 13, 815-826. F. H. C. Crick and C. Asanuma [1988], Certain aspects of the anatomy and physiology of cerebral cortex. In: Parallel Distributed Processing (J. L. McClell and D. E. Rumelhart, eds.). The MIT Press, Cambridge, Vol. 2, 333-371. DARPA [1988], DARPA Neural Network Study — Final Report (Technical Report 840). Lincoln, MA: MIT Lincoln Laboratory. O. Diekmann, S. A. van Gils, S. M. Verduyn Lunel and H.-O. Walther [1995], Delay Equations: Functional-, Complex-, and Nonlinear Analysis. Appl. Math. Sei. 110. Springer-Verlag, New York. R. D. Driver [1977], Ordinary and Delay Differential Equations. Appl. Math. Sei. 20. SpringerVerlag, New York. J. C. Eccles, M. Ito and J. Szentagothai [1967], The Cerebellum as a Neuronal Machine. Springer-Verlag, New York. L. H. Erbe, Κ. Geba, W. Krawcewicz and J. Wu [1992], 5 1 -degree and global Hopf bifurcation theory of functional differential equations. J. Differential Equations 98, 277-298. T. Faria and L. T. Magalhäes [1995], Normal forms for retarded functional differential equations with parameters and applications to Hopf bifurcations. J. Differential Equations 122, 181-200.
B. Fiedler and T. Gedeon [1998], A class of convergence neural network dynamics. Phys. D 111, 288-294.
Bibliography
173
J. Μ. Fuster, R. H. Bauer and J. P. Jerrey [ 1982], Cellular discharge in the dorsolateral prefrontal cortex of the monkey during cognitive tasks. Exp. Neurol. 77, 679-694. T. Gedeon [1999], Structure and dynamics of artificial neural networks. In: Differential Equations with Applications to Biology (S. Ruan, G. Wolkowicz and J. Wu, eds.). Fields Inst. Commun. 21. Amer. Math. Soc., Providence, RI, 217-224. F. Giannakopoulos and A. Zapp [1999], Local and global Hopf bifurcation in a scalar delay differential equation, Adv. Differential Equations 237, 425-450. M. Gilli [1993], Strange attractors in delayed cellular neural networks, IEEE Trans. Circuits Systems 140, 849-853. P. S. Goldman-Rakic [1984], Modular organization of prefrontal cortex, Trends Neurosci. 7, 419-429. K. Gopalsamy and X.-Z. He [1994], Stability in asymmetric Hopfield nets with transmission delays, Phys. D 76, 344-358. S. Grossberg [1967], Nonlinear difference-differential equations in predictions and learning theory, Proc. Nat. Acad. Sei. U.S.A. 58, 1329-1334. S. Grossberg [1968a], Some physiological and biochemical consequences of psychological postulates, Proc. Nat. Acad. Sei. U.S.A. 60, 758-765. S. Grossberg [1968b], Global ratio limit theorems for some nonlinear functional-differential equations, I, II. Bull. Amer. Math. Soc. 74, 95-99; 100-105. S. Grossberg [1968c], A prediction theory for some nonlinear functional-differential equations, I. J. Math. Anal. Appl. 21, 643-694. S. Grossberg [1968d], A prediction theory for some nonlinear functional-differential equations, II. J. Math. Anal. Appl. 22, 490-527. S. Grossberg [ 1969], On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. J. Statist. Physics 1, 319-350. S. Grossberg [1970a], Some networks that can learn, remember, and reproduce any number of complicated patterns, II. Stud. Appl. Math. 49, 135-166. S. Grossberg [1970b], Embedding fields: underlying philosophy, matehmatics, and applications to psychology, physiology and anatomy. J. Cybernet. 1, 28-50. S. Grossberg [1970c], Neural pattern discrimination. J. Theor. Biol. 27, 291-337. S. Grossberg [1971a], Pavlovian pattern learning by nonlinear neural networks. Proc. Nat. Acad. Sei. U.S.A. 68, 828-831. S. Grossberg [1971b], On the dynamics of operating conditioning. J. Theor. Biol. 33,225-255. S. Grossberg [1972], Pattern learning by functional-differential neural networks with arbitrary path weights. In: Delay and Functional Differential Equations and Their Applications (K. Schmitt, ed.), Academic Press, New York, 121-160. S. Grossberg [1973], Contour enhancement, short term memory and constancies in reverberating neural networks. Stud. Appl. Math. 52, 217-257. S. Grossberg [1978a], A theory of human memory: self-organization and performance of sensory-motor codes, maps, and plans. In: Progr. Theoret. Biology (R. Rosen and F. Snell, eds.), Vol. 5, 233-274.
174
Bibliography
S. Grossberg [1978b], Competition, decision, and consensus. J. Math. Anal. Appl. 66,447-493. S. Grossberg [1982], Studies of Mind and Brain. Boston Stud. Philos. Sei. 70. D. Reidel Publishing Co., Boston. S. Grossberg [1988], Nonlinear networks: principles, mechanisms, and architectures. Neural Networks 1, 17-61. J. K. Hale [1969], Ordinary Differential Equations. Wiley-Interscience, New York. J. K. Hale [1977], Theory of Functional Differential Equations. Appl. Math. Sei. 3. SpringerVerlag, New York. J. K. Hale [1988], Asymptotic Behavior of Dissipative Systems. Math. Surveys Monogr. 25. Amer. Math. Soc., Providence, RI. J. K. Hale and W. Huang [1994], Period doubling in singularly perturbed delay equations. J. Differential Equations 114, 1-23. J. K. Hale and S. M. Yerduyn Lunel [1993], Introduction to Functional Differential Equations. Appl. Math. Sei. 99. Springer-Verlag, New York. R. L. Harvey [ 1991 ], Recent advances in neural networks for machine visions. Proc. of the 3rd Biennial Acoustics, Speech and Signal Processing Mini Conference, IV.1-IV.6. Weston, MA. R. L. Harvey [1994], Neural Network Principles. Prentice-Hall Inc., New Jersey. S. Haykin [1994], Neural Networks, A Comprehensive Foundation. Macmillan College Publishing, New York. D. O. Hebb [1949], The Organization of Behavior. John Wiley & Sons, New York. Α. V. M. Herz [1996] Global analysis of recurrent neural networks. In: Models of Neural Networks III (E. Domany, J. L. van Hemmen and K. Schulten, eds.). Springer-Verlag, Berlin, 1-54. Α. V. M. Herz, Β. Salzer, R. Kühn and J. L. van Hemmen [1989], Hebbian learning reconsidered: representation of static and dynamic objects in associative neural nets. Biol. Cybern. 60,457-467. Μ. Hirsch[ 1984], The dynamical systems approach to differential equations. Bull. Amer. Math. Soc. 11, 1-64. M. Hirsch [1985], Systems of differential equations which are competitive or cooperative II: Convergence almost everywhere. SIAM J. Math. Anal. 16,432-^439. M. Hirsch [1987], Convergence in neural networks. In: Proc. of the 1st Int. Conf. on Neural Networks. San Diego (M. Caudill and C. Butter, eds.), IEEE Service Center, 115-126. M. Hirsch [1988], Stability and convergence in strongly monotone dynamical systems. J. Reine Angew. Math. 383, 1-53. M. Hirsch [1989], Convergent activation dynamics in continuous time networks. Neural Networks 2, 331-349. M. Hirsch and S. Smale [1974], Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, New York. A. L. Hodgkin and Α. Ε Huxley [1952], A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500-544.
Bibliography
175
J. J. Hopfield [1982], Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sei. U.S.A. 79, 2554-2558. J. J. Hopfield [1984], Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Nat. Acad. Sei. U.S.A. 81, 3088-3092. F. C. Hoppensteadt [1997], An Introduction to the Mathematics of Neurons, Modeling in the Frequency Domain (2nd edition). Cambridge University Press, New York. L. Huang and J. Wu [1999a], Irregular outstars for learning/recalling dominated spatial patterns. Preprint. L. Huang and J. Wu [1999b], The role of threshold in preventing delay-induced oscillations of frustrated neural networks with McCulloch-Pitts nonlinearity. Preprint. L. Huang and Wu [2000], Dynamics of inhibitory artificial neural networks with threshold nonlinearity. To appear in Fields Inst. Commun. L. Huang and Wu [2001], Nonlinear waves in networks of neurons with delayed feedback: pattern function and continuation. Preprint. D. H. Hubel and Τ. N. Wiesel [1962], Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154. D. H. Hubel and Τ. N. Wiesel [1965], Receptive fields and functional architecture in two non-striate visul areas of the cat. J. Neurophysiol. 28, 229-298. C. R. Johnson [1988], Combinatorial matrix analysis: an overview. Linear Algebra Appl. 107, 3-10. E. Kamke [1932], Zur Theorie der Systeme gewöhnlicher Differentialgleichungen, II. Acta Math. 58, 57-85. D. Kernell [1965], The adaptation and the relation between discharge frequency and current strength of cat lumbosacral motoneurones stimulated by long-lasting injected currents. Acta Physiol. Scand. 65, 65-73. T. Krisztin and H.-O. Walther [1999], Unique periodic orbits for delayed positive feedback and the global attractor. To appear in J. Dynam. Differential Equations. T.Krisztin, H.-O. Walther and J. Wu [1999], Shape, Smoothness and Invariant Stratification of an Attracting Set for Delayed Monotone Positive Feedback. Fields Inst. Monogr. 11. Amer. Math. Soc., Providence, RI. B. Lani-Wayda and Walther [1995], Chaotic motion generated by delayed negative feedback, part I: a transversality criterion. Differential Integral Equations 8, 1407-1452. J. P. LaSalle [1976], The Stability of Dynamical Systems. Regional Conference Series in Applied Mathematics 25. SLAM, Philadephia. D. S. Levine [1991], Introduction to Neural and Cognitive Modelling. Lawrence Erlbaum Associate, Inc., New Jersey. R. H. MacArthur [1969], Species packing, or what competition minimizes. Proc. Nat. Acad. Sei. U.S.A. 54, 1369-1375. Μ. Mackey and L. Glass [1977], Oscillation and chaos in physiological control systems. Science 197, 287-289.
176
Bibliography
J. Mallet-Paret [1988], Morse decompositions for differential delay equations. J. Differential Equations 72, 270-315. J. Mallet-Paret and G. Sell [1996a], Systems of differential delay equations: Roquet multipliers and discrete Lyapunov functions. J. Differential Equations 125, 385-440. J. Mallet-Paret and G. Sell [1996b], The Poincare-Bendixson theorem for monotone cyclic feedback systems with delay. J. Differential Equations 125, 441-489. L. Marcus [1956], Asymptotically autonomous differential systems. In: Contributions to the Theory of Nonlinear Oscillations, Vol. 3, Princeton University Press, Princeton, New Jersey, 17-29. C. M. Marcus and R. M. Westervelt [1988], Basins of attraction for electronic neural networks. In: Neural Information Processing Systems (D. Z. Anderson, ed.), American Institute of Physics, New York, 524-533. C. M. Marcus and R. M. Westervelt [1989], Stability of analog neural networks with delay. Phys. Rev. A 39, 347-359. R. M. May and W. J. Leonard [1975], Nonlinear aspects of competition between three species. SIAM J. Appl. Math. 29, 243-253. W. McCulloch and W. Pitts [ 1943], A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophysics 5, 115-133. E. J. McShane [1944], Integration. Princeton University Press, Princeton, New Jersey. K. Mischaikow, H. L. Smith and H. R. Thieme [1995], Asymptotically autonomous semiflows: chain recurrence and Liapunov functions. Trans. Amer. Math. Soc. 347, 1669-1685. M. Morita [1993], Associative memory with non-monotone dynamics. Neural Networks 6, 115-126. V. B. Mountcastle [1957], Modality and topographic properties of single neurons of cat's somatic sensory cortex. J. Neurophysiol. 20, 408-434. M. Müller [1926], Über das Fundamentaltheoreme in der Theorie der gewöhnlichen Differentialgleichungen. Math. Z. 26, 619-645. B. Müllerand J. Reinhardt [1991], Neural Networks, An Introduction. Springer-Verlag, Berlin. L. Olien and J. Beiair [1997], Bifurcations, stability, and monotonicity properties of a delayed neural network model. Phys. D 102, 349-363. K. Pakdaman, C. P. Malta, C. Grotta-Ragazzo and J.-F. Vibert [1996], Asymptotic behavior of irreducible excitatory networks of graded-response neurons. Technical Report IFUSPIP1200, University de Säo Paulo. Κ. Pakdaman, C. P. Grotta-Ragazzo, K. Malta and J.-F. Vibert [1997a], Delay-induced transient oscillations in a two-neuron network. Resenhas 3,45-54. K. Pakdaman, C. P. Malta, C. Grotta-Ragazzo and J.-F. Vibert [1997b], Effect of delay on the boundary of the basin of attraction in a self-excited single graded-response neuron. Neural Comput. 9, 319-336. K. Pakdaman, C. P. Malta, C. Grotta-Ragazzo, O. Arino and J.-F. Vibert [1997c], Transient oscillations in continuous-time excitatory ring neural networks with delay. Phys. Rev. Ε 55, 3234—3248.
Bibliography
177
Κ. Pakdaman, C. Grotta-Ragazzo and C. P. Malta [1998a], Transient regime duration in continuous-time neural networks withd delay. Phys. Rev. Ε 58, 3623-3627. Κ. Pakdaman, C. Grotta-Ragazzo, C. P. Malta, O. Arino and J.-F. Vibert [1998b], Effect of delay on the boundary of the basin of attraction in a system of two neurons. Neural Networks 11, 509-519. F. J. Pineda [1987], Generalization of back-propagation to recurrent neural networks. Phys. Rev. Lett. 59, 2229-2232. D. Purves, G. Augustine, D. Fitzpatrick, L. Katz, A. LaMantia and J. McNamara [1997], Neuroscience. Sinauer Association, INC., Massachusetts. W. Rail [1955], Experimental monosynaptic input-output relations in the mammalian spinal cord. J. Cell. Comp. Physiol. 46, 413^37. P. H. Raven and G. B. Johnson [1995], Biology (3rd edition). Wm. C. Brown Publishers, Iowa. F. Ratcliff [ 1965], Mach Bands: Quantitative Studies of Neural Networks in the Retina. HoldenDay, San Francisco. C. E. Rosenkilde, R. H. Bauer and J. M. Fuster [1981], Single cell activity in ventral prefrontal cortex of behaving monkeys. Brain Res. 209, 375-394. T. Roska and L. O. Chua [1992], Cellular neural networks with non-linear and delay-type template elements and non-uniform grids. Internat. J. Circuit Theory Appl. 20,469-481. T. Roska and J. Vandewalle [1993], Cellular Neural Networks. John Wiley & Sons, New York. T. C. Ruch, Η. D. Patton, J. W. Woodbury and A. L. Towe [1961], Neurophysiology. W. B. Saunders, Philadephia. J. Selgrade [1980], Asymptotic behavior of solutions to single loop positive feedback systems. J. Differential Equations 38, 80-103. P. Schuster, K. Sigmund and R. Wolff [1979], On ω-limits for competition between three species. SIAM J. Appl. Math. 37, 49-54. J. Smillie [1984], Competitive and cooperative tridiagonal systems of differential equations. SIAM J. Math. Anal. 15, 530-534. H. L. Smith [1987], Monotone semiflows generated by functional differential equations. J. Differential Equations 66, 420-442. H. L. Smith [1988], Systems of ordinary differential equations which generate an order preserving flow, a survey of results. SIAM Review 30, 87-113. H. L. Smith [1991], Convergent and oscillatory activation dynamics for cascades of neural nets with nearest neighbor competitive or cooperative interactions. Neural Networks 4, 41-46. H. L. Smith [1995], Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems. Math. Surveys Monogr. 41. Amer. Math. Soc., Providence, RI. H. Stech [1985], Hopf bifurcation calculations for functional differential equations. J. Math. Anal. Appl. 109,472-491. C. Stefanis [1969], Interneuronal mechanisms in the cortex. In: The Interneuron (Μ. Brazier, ed.). University of California Press, Los Angeles, 497-526.
178
Bibliography
P. van den Driessche and X. F. Zou [1998], Global attractivity in delayed Hopfiled neural network models. SI AM J. Appl. Math. 58, 1878-1890. H.-O. Walther [1977], Über Ejektivität und periodische Lösungen bei Funktionaldifferentialgleichungeng mit verteilter Verzögerung. Habilitationsschrift, Universität München. H.-O. Waither [1979], On instability, ω-limit sets and periodic solutions of nonlinear autonomous differential delay equations. In: Functional Differential Equations and Approximation of Fixed Points (H.-O. Peitgen and H.-O. Walther, eds.). Lecture Notes in Math. 730, Springer-Verlag, New York, 489-503. H.-O. Walther [1998], The singularities of an attractor of a delay differential equations, Funct. Differential Equations 5, 513-548. H. R. Wilson and J. D. Cowan [1972], Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1-24. J. Wu [1998], Symmetric functional differential equations and neural networks with memory. Trans. Amer. Math. Soc. 350, 4799-4838. J. Wu [1999], Dynamics of neural networks with delay: attraction and content-addressable memory. In: Proc. Internat. Conference on Dynamical Systems and Nonlinear Mechanics (K. Vajravelu, ed.), Florida, 1999. J. Wu, T. Faria and Y. Huang [1999], Synchronization and stable phase-locking in a network of neurons with memory, Math. Comput. Modelling 30, 117-138. S. Yoshizawa, M. Morita and S. I. Amari [1993], Capacity of associative memory using a nonmonotonic neural model. Neural Networks 6, 167-176.
Index
α-limit set, 94 5 1 -equivariant degree, 111 Τ-periodic orbit, 94 ω-limit point, 62 ω-limit set, 62, 94 0-1 distribution, 43 Boltzman machine, 70 Brain-State-in-a-Box (BSB), 69 CS, 27, 59 Cohen-Grossberg Convergence Theorem, 66 Cohen-Grossberg system, 64 CAM, 61 Eigen and Schuster equation, 69 Floquet multiplier, 94, 161, 169 GA, 51, 53 Gilpin and Ayala system, 69 Gronwall's inequality, 27 Grossberg's Learning Theorem, 57, 58 Hartline-Ratliff-Miller equation, 69 Hebb's law, 11 Hodgkin-Huxley model, 11 Hopf bifurcation, 89, 105 Kirchhoff's law, 78 LaSalle's Invariance Principle, 62, 63 Liapunov (energy) function, 62,63,70,72, 74, 75 Liapunov functional, 97, 168 May and Leonard's model, 71 McCulloch-Pitts model, 15, 69, 117 Pavlovian conditioning, 27 Perron property, 103 Perron-Firobenius theorem, 84 Poincare-Bendixson, 168 Poincare-map, 169 Ranvier node, 5 UCR, 27 UCS, 59 Wilson-Cowan equation, 18 Volterra-Lotka equation, 68 absolute refractory period, 4
absolutely globally attractive, 98 action potential, 2, 3, 9 activation dynamics, 23 activation function, 16 activity, 17 additive STM, 11 admissible, 66 amplification function, 16 amplifier, 77 artificial neural network, 8 associative memory, 146 asymptotically autonomous, 46 asymptotically stable, 61, 83, 94, 99,107 asynchronous equilibria, 120 asynchronous, 89, 119 attractivity, 95, 98 axon hillock, 2 axon, 1 backward propagation, 53 basin boundaries, 146 basin of attraction, 83, 94,146,149,170 bias, 17 brightness contrast, 36 canonical form, 80 capacitor, 77 cascade of neural nets, 52 cascade of neurons, 19 cell body, 1 cell, 8 cellular neural network, 15 center manifold, 108 center-stable manifold, 162 central nervous system, 7 chaos, 89 characteristic equation, 99, 167 characteristic function, 156 characteristic value, 99 chemical synapse, 2 chord cycle, 72 chord, 72
circuit diagram, 77
circuit, 7
combinatorial matrix theory, 71
combinatorially symmetric, 71
command neuron, 24
computational performance, 90
concavity, 165
conditional stimulus, 27, 57
connected, 72
content, 61
content-addressable memory, 67
contour enhancement, 40, 49
contour enhancing, 49
contraction, 96
contractive neural network, 88
converge, 82
current-voltage characteristics, 3
cycle, 104
cyclic system, 168
cytoplasm, 2
delay differential equation, 90, 91
delay distribution, 91
delay-induced instability, 101
dendrite, 1
depolarization, 3
desynchronization, 89, 119, 166
digraph, 71
directed graph (digraph), 103
direction, 108, 109
discrete Liapunov functional, 160
discrete, 90
dissipativeness, 165
domain of absorption, 156
domain of attraction, 89
energy, 70
equilibrium, 2, 61, 94
eventually strongly monotone, 89, 98, 147
excitatory, 6
fair, 40, 42, 49
feature detector, 51
feedback network, 18, 21
feedforward network, 18
frustrated network, 89, 104
frustration, 89
fully connected, 20
fully frustrated, 104
generator, 156
generic algorithm (GA), 51, 53
generic convergence, 98
global Hopf bifurcation theorem, 111
global attractor, 88, 95, 157
global continuation, 110
global coordinate, 161
global existence, 110
global pattern formation, 68
gradient-descent, 53
graph representation, 156, 161
graph theoretic test, 82
heteroclinic connection, 160
hidden neuron, 19
homotopy, 169
hyperpolarization, 3
impedance, 77
impulse, 2, 3
inhibitory signal threshold, 68
inhibitory, 6
initial time, 92
initial value problem, 92
initial value, 92
input layer, 18
input neuron, 18, 24
input-output function, 16
instar, 30
invariant disk, 160
invariant, 62, 94
ion channel, 3
ion pump, 3
irreducible, 72, 82
joint activation-weight dynamics, 22
labeled graph, 17
learning, 30, 70
limiting equation, 30, 46
limiting profile, 163
linearization principle, 83
linearization, 99
linearized equation, 156
linearized stability, 162
linearly unstable, 83
local approximation, 157
locally Lipschitz continuous, 92
locally connected network, 20
locally uniform, 40
logistic function, 15
long-term memory, 9
luminance, 35
macromolecular evolution, 69
manifold boundary, 161
masking field model, 70
mathematical classification, 68
maximal interval of existence, 92
membrane equation, 11
modified reflectance coefficient, 32
monodromy operator, 95, 161
monotone semiflow, 98
monotone system, 62, 80
myelin sheath, 5
nearest neighbor interaction, 86
negative orbit, 94
nerve cell, 1
neural gain, 15, 99
neural network, 17
neuron, 1, 8
neurotransmitter, 2, 5
node, 8
noise-saturation dilemma, 34
nonlinear amplifier, 77
normal form, 108
normalized, 40
on-center off-surround network, 34, 64
orbitally (asymptotically) stable, 94, 109
order-preserving, 40, 98
orientation, 51
output layer, 18
output neuron, 18
output variable, 17
outstar learning, 26
outstar, 24
overshooting, 4
pairing, 57
partial ordering, 79
partially connected, 20
passive decay LTM, 11
path, 71
pattern class, 33
pattern discrimination, 60
pattern recognition, 32
pattern variable, 38
payoff function, 54
periodic solution, 88
peripheral nervous system, 7
persistent, 38
phase-locked, 88, 89, 166, 167
phase space, 91
piecewise linear function, 15
plasticity, 7
point dissipative, 95
population, 8
positive orbit, 94
positively invariant, 62
postsynaptic, 2
prediction, 60
presynaptic inhibition, 6
presynaptic, 2
propagation time delay, 11
pulse-like, 163
quenching, 40, 49, 50
realified generalized eigenspace, 156
recall, 26
receptive field, 20
recognization, 33, 70
recurrent network, 21
reflectance coefficients, 26, 31
relative activity, 35
relative refractory period, 4
relaxation time, 79
resistor, 77
resting potential, 3
saltatory conduction, 5
semiflow, 94
separatrix, 156
short-term memory, 9
shunting STM equation, 12
shunting equation, 11
shunting network, 34
sigmoid function, 15, 16
sign stability, 81
sign symmetric, 71, 86
sign symmetry, 81
signal function, 10, 17
signaling mechanism, 2
signed digraph, 104
simplex signal set, 76
single-layer network, 18
slowly oscillating, 113
solution, 92
soma, 1
spanning tree, 72
spatial pattern, 25, 26, 31
spectrum, 156
spikes, 3
spindle, 160, 169
spine, 2
square wave, 163
stability modulus, 83
stable matrix, 84
stable, 83, 94, 99
steady-state, 52, 61
step function, 15
strength, 17
strongly continuous semigroup, 156
strongly order-preserving, 82, 98
subcritical bifurcation, 108
submanifold, 157
subthreshold, 68
sun-dual, 163
sun-star, 163
supercritical bifurcation, 108
suprathreshold, 68
sustained oscillation, 138
switching speed, 90
symmetry, 165
synapse, 2, 17
synaptic coupling coefficient, 9
synaptic coupling, 50
synaptic ending, 2
synaptic gap, 2
synaptic transmission, 5
synaptic vesicle, 2
synchronization, 88, 89, 120
synchronized, 105, 116, 118
synchronous equilibria, 120
synchronous, 118, 155
system, 7
threshold, 11, 17, 116
total LTM trace, 26
total activity normalization, 35
total activity, 26, 38
total input, 26
trajectory, 62, 94
transcendental equation, 100
transient oscillation, 89, 138
transient regime duration (TRD), 144
transient, 38
transition, 120
transmission velocity, 5
transversality condition, 106
tree, 72
tridiagonal, 73, 86
trivalent, 40
ultimate periodicity, 120
ultimately periodic, 116
unconditional response, 27
unconditional stimulus, 27, 57
undirected graph, 71
uniform, 40, 50
uniform distribution, 43
unit, 8
unstable dimension, 108
unstable space, 156
unstable, 83, 94, 99
voltage amplification function, 77
voting paradox, 71
weight dynamics, 23
weight, 9, 17
weighted sum, 31