DERIVATIVES: ASSIGNMENT 3, PART B
HARJOAT S. BHAMRA
Part B needs to be written up and submitted online. Part B is partly a computational exercise, so submit
your code. There are Bonus Parts which you do not need to do, but doing them can boost your overall mark by
compensating for errors and omissions elsewhere.
(1) Consider an investor who does not consume or work to earn any labour income. All she does is invest her
financial wealth in N + 1 assets. The first asset is risk-free with return rtdt over the interval [t, t + dt).
The remaining N assets are risky and risky asset $n \in \{1, \dots, N\}$ has return $dR_{n,t}$, where
\[
dR_{n,t} = \mu_{n,t}\,dt + \sigma_{n,t}\,dZ_{n,t}.
\]
$Z_n$ is a standard Brownian motion under the physical probability measure $\mathbb{P}$, such that $E_t[dZ_{i,t}\,dZ_{j,t}] = \rho_{ij,t}\,dt$ for $i \neq j$.
The investor invests the fraction $\phi_{n,t}$ of her date-$t$ wealth in risky asset $n$.
(a) Explain why the return on the investor's portfolio over the interval $[t, t + dt)$ is given by
\[
dR_{p,t} = \left(1 - \sum_{n=1}^{N}\phi_{n,t}\right) r_t\,dt + \sum_{n=1}^{N}\phi_{n,t}\,dR_{n,t}.
\]
(b) Hence, show that
\[
dR_{p,t} = r_t\,dt + \phi_t^\top(\mu_t - r_t\mathbf{1})\,dt + \phi_t^\top\Sigma_t\,dZ_t,
\]
where
\[
\mu_t = (\mu_{1,t}, \dots, \mu_{N,t})^\top,\quad
\mathbf{1} = (1, \dots, 1)^\top,\quad
\Sigma_t = \mathrm{diag}(\sigma_{1,t}, \dots, \sigma_{N,t}),\quad
\phi_t = (\phi_{1,t}, \dots, \phi_{N,t})^\top,\quad
dZ_t = (dZ_{1,t}, \dots, dZ_{N,t})^\top.
\]
(c) The investor has the following objective function
\[
E_t[dR_{p,t}] - \frac{1}{2}\gamma\,\mathrm{Var}_t[dR_{p,t}],
\]
where $\gamma > 0$. Explain the ideas behind the design of this objective function.
(d) Show that the optimal portfolio is obtained by solving the following linear-quadratic optimization problem:
\[
\max_{\phi_t}\ \phi_t^\top(\mu_t - \mathbf{1}r_t) - \frac{1}{2}\gamma\,\phi_t^\top V_t\phi_t,
\]
where $V_t$ is an $N$ by $N$ matrix such that
\[
[V_t]_{ij} =
\begin{cases}
\rho_{ij,t}\,\sigma_{i,t}\sigma_{j,t}, & i \neq j\\
\sigma_{i,t}^2, & i = j
\end{cases}
\]
and $\rho_{ij,t} = \rho_{ji,t}$.
(e) Show that the optimal portfolio vector is
\[
\phi_t = \frac{1}{\gamma}V_t^{-1}(\mu_t - \mathbf{1}r_t).
\]
(f) Use a computer algebra package such as Mathematica to calculate $V_t^{-1}$ when $N = 3$ for special cases of your choice. The obvious one to start with is $\rho_{12,t} = \rho_{13,t} = \rho_{23,t} = 0$.
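If you prefer not to use Mathematica, a minimal symbolic sketch in Python with sympy does the same job (the symbol names are illustrative):

```python
import sympy as sp

# Volatilities and pairwise correlations for N = 3 (illustrative symbol names)
s1, s2, s3 = sp.symbols('sigma_1 sigma_2 sigma_3', positive=True)
r12, r13, r23 = sp.symbols('rho_12 rho_13 rho_23')

# The matrix V_t from part (d): sigma_i^2 on the diagonal, rho_ij sigma_i sigma_j off it
V = sp.Matrix([
    [s1**2,     r12*s1*s2, r13*s1*s3],
    [r12*s1*s2, s2**2,     r23*s2*s3],
    [r13*s1*s3, r23*s2*s3, s3**2],
])

# Special case rho_12 = rho_13 = rho_23 = 0: V_t is diagonal, so the inverse is immediate
print(V.subs({r12: 0, r13: 0, r23: 0}).inv())   # diag(1/sigma_i^2)

# General case: the entries grow quickly once the correlations are nonzero
print(sp.simplify(V.inv()))
```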
(2) Suppose the risk-free rate $r$ is constant and the cum-dividend return on the SP500 over the interval $[t, t + dt)$ is given by
\[
dR_t = (r + \lambda v_t)\,dt + \sqrt{v_t}\,dZ_t,
\]
where
\[
dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dZ_{v,t},
\]
where $Z$ and $Z_v$ are standard Brownian motions under $\mathbb{P}$ such that
\[
E_t[dZ_t\,dZ_{v,t}] = -\rho\,dt,\quad \rho > 0.
\]
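To make these dynamics concrete, here is a minimal Euler-Maruyama simulation sketch in Python; all parameter values are illustrative rather than calibrated:

```python
import numpy as np

rng = np.random.default_rng(0)
r, lam, kappa, theta, sigma, rho = 0.02, 1.0, 5.0, 0.04, 0.2, 0.5  # illustrative
dt, n = 1 / 252, 2520                     # daily steps over ten years

v = np.empty(n + 1); v[0] = theta         # instantaneous variance path
R = np.zeros(n + 1)                       # cumulative SP500 return path
for t in range(n):
    dZ = rng.normal(0.0, np.sqrt(dt))
    # Build dZ_v from two independent normals so that E[dZ dZ_v] = -rho dt
    dZv = -rho * dZ + np.sqrt(1 - rho**2) * rng.normal(0.0, np.sqrt(dt))
    R[t + 1] = R[t] + (r + lam * v[t]) * dt + np.sqrt(v[t]) * dZ
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt + sigma * np.sqrt(v[t]) * dZv, 1e-12)
```

The floor at $10^{-12}$ is a crude guard against the Euler step pushing $v$ negative.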
A household's optimal consumption-portfolio choice problem is given by the following optimal stochastic control problem
\[
J_t = \sup_{(C_s)_{s \ge t},\,(\phi_s)_{s \ge t}} E_t\left[\int_t^\infty e^{-\delta(s-t)}u(C_s)\,ds\right]
\quad \text{s.t.} \quad
dW_t = W_t\,dR_{p,t} - C_t\,dt,
\]
where $\phi_t$ is the fraction of date-$t$ wealth invested in the SP500,
\[
dR_{p,t} = (r + \phi_t\lambda v_t)\,dt + \phi_t\sqrt{v_t}\,dZ_t,
\]
and
\[
u(x) = \frac{x^{1-\gamma}}{1-\gamma}.
\]
The supremized objective function is known as the value function, which we denote via $J$. The date-$t$ value function depends on date-$t$ wealth and the date-$t$ instantaneous variance $v_t$ via
\[
J_t = H(v_t)^\gamma u(W_t),
\]
where $H(v)$ satisfies the following ordinary differential equation (ode)
\[
0 = 1 - \left[\frac{1}{\gamma}\delta + \left(1 - \frac{1}{\gamma}\right)\left(r + \frac{v}{2\gamma}\left(\lambda^2 - \gamma^2\sigma^2(1 - \rho^2)\left(\frac{H'(v)}{H(v)}\right)^{2}\right)\right)\right]H(v) + \kappa'(\theta' - v)H'(v) + \frac{1}{2}\sigma^2 v H''(v),
\tag{1}
\]
and
\[
\kappa' = \kappa - \frac{(\gamma - 1)\lambda\rho\sigma}{\gamma},\qquad
\theta' = \frac{\kappa}{\kappa'}\theta.
\]
The optimal portfolio and consumption expenditure choices are given by
\[
\phi_t = \frac{\lambda}{\gamma} - \rho\sigma\frac{H'(v_t)}{H(v_t)},\qquad
C_t = \frac{1}{H(v_t)}W_t.
\]
(a) For the special case γ = 1, solve the ordinary differential equation. Ensure your solution remains
finite as v → ∞. Hence, find the household’s optimal portfolio and consumption expenditure choices.
Explain the economics underlying your solution.
(b) Use your solution to provide wealth and expenditure management advice for a client with current
wealth of 1M GBP.
(c) By defining
\[
k(v) = \frac{1}{\gamma}\delta + \left(1 - \frac{1}{\gamma}\right)\left(r + \frac{v}{2\gamma}\left(\lambda^2 - \gamma^2\sigma^2(1 - \rho^2)\left(\frac{H'(v)}{H(v)}\right)^{2}\right)\right),
\]
\[
\mu'_v(v) = \kappa'(\theta' - v),\qquad
\sigma_v^2(v) = \sigma^2 v,
\]
we can write the ode (1) as
\[
0 = 1 - k(v)H(v) + \mu'_v(v)H'(v) + \frac{1}{2}\sigma_v^2(v)H''(v).
\tag{2}
\]
To solve (2) we shall assume that $v$ can take $N + 1$ equally spaced values: $v_1, \dots, v_{N+1}$, which the coder needs to choose. Define $\Delta v = v_2 - v_1$. We have thereby discretized the state space of the stochastic process $v$.
Define the $N + 1$ by $N + 1$ matrix $S$ via
\[
S =
\begin{pmatrix}
-p_{1,2} & p_{1,2} & 0 & \cdots & \cdots & \cdots & 0\\
p_{2,1} & -(p_{2,1} + p_{2,3}) & p_{2,3} & 0 & \cdots & \cdots & 0\\
0 & p_{3,2} & -(p_{3,2} + p_{3,4}) & p_{3,4} & 0 & \cdots & 0\\
\vdots & & \ddots & \ddots & \ddots & & \vdots\\
0 & \cdots & 0 & p_{N-1,N-2} & -(p_{N-1,N-2} + p_{N-1,N}) & p_{N-1,N} & 0\\
0 & \cdots & \cdots & 0 & p_{N,N-1} & -(p_{N,N-1} + p_{N,N+1}) & p_{N,N+1}\\
0 & \cdots & \cdots & \cdots & 0 & p_{N+1,N} & -p_{N+1,N}
\end{pmatrix},
\]
where
\[
p_{n,n+1} = \frac{\mu'_v(v_n)}{\Delta v}\,\mathbb{I}_{\{\mu'_v(v_n) > 0\}} + \frac{1}{2}\frac{\sigma_v^2(v_n)}{(\Delta v)^2},\qquad
p_{n,n-1} = -\frac{\mu'_v(v_n)}{\Delta v}\,\mathbb{I}_{\{\mu'_v(v_n) < 0\}} + \frac{1}{2}\frac{\sigma_v^2(v_n)}{(\Delta v)^2}.
\]
Define the $N + 1$ by $N + 1$ matrix $K$ via
\[
K = \mathrm{diag}(k(v_1), \dots, k(v_{N+1})),
\]
where
\[
k(v_n) = \frac{1}{\gamma}\delta + \left(1 - \frac{1}{\gamma}\right)\left(r + \frac{v_n}{2\gamma}\left(\lambda^2 - \gamma^2\sigma^2(1 - \rho^2)\left(\frac{\frac{H(v_{n+1}) - H(v_n)}{\Delta v}\,\mathbb{I}_{\{\mu'_v(v_n) > 0\}} + \frac{H(v_n) - H(v_{n-1})}{\Delta v}\,\mathbb{I}_{\{\mu'_v(v_n) < 0\}}}{H(v_n)}\right)^{2}\right)\right).
\]
Define the $N + 1$ by $1$ vector of ones $\mathbf{1} = (1, \dots, 1)^\top$.
We can find $H = (H(v_1), \dots, H(v_{N+1}))^\top$ via the following recursive procedure:
\[
H^{k+1} = (I - S\Delta t)^{-1}\left[\mathbf{1}\,\Delta t + (I - K\Delta t)H^{k}\right].
\tag{3}
\]
The above equation starts with $H^{k}$ and then provides an updated version of $H$, labelled as $H^{k+1}$.
We start with an initial guess, labelled as $H^{1}$, and then use the above equation to generate $H^{2}$, $H^{3}$, and so on until $\|H^{k+1} - H^{k}\| = \max_{n \in \{1,\dots,N+1\}}\,|H^{k+1}(v_n) - H^{k}(v_n)| < \epsilon$ for some $\epsilon > 0$ which the coder needs to choose.
Use the above iterative procedure to write code that finds the vector H.
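A minimal sketch of the procedure in Python/NumPy follows. Every numerical choice in it (the grid, $\Delta t$, $\epsilon$, and the model parameters) is illustrative and should be tuned as parts (d) and (f) require:

```python
import numpy as np

# --- Illustrative model parameters (not calibrated) ---
gamma, delta, r = 2.0, 0.03, 0.02
lam, kappa, theta, sigma, rho = 1.0, 5.0, 0.04, 0.2, 0.5

# Risk-adjusted drift parameters kappa' and theta'
kappa_p = kappa - (gamma - 1.0) * lam * rho * sigma / gamma
theta_p = kappa * theta / kappa_p

# --- State-space grid: N+1 equally spaced variance levels (coder's choice) ---
N = 400
v = np.linspace(1e-4, 0.5, N + 1)
dv = v[1] - v[0]
mu_p = kappa_p * (theta_p - v)            # mu'_v(v_n)
sig2 = sigma**2 * v                       # sigma_v^2(v_n)

# --- The matrix S, built from the upwind rates p_{n,n+1} and p_{n,n-1} ---
p_up = np.maximum(mu_p, 0.0) / dv + 0.5 * sig2 / dv**2
p_dn = np.maximum(-mu_p, 0.0) / dv + 0.5 * sig2 / dv**2
S = np.zeros((N + 1, N + 1))
S[0, :2] = [-p_up[0], p_up[0]]                                  # first row
for n in range(1, N):
    S[n, n - 1 : n + 2] = [p_dn[n], -(p_dn[n] + p_up[n]), p_up[n]]
S[N, N - 1 :] = [p_dn[N], -p_dn[N]]                             # last row

def k_vec(H):
    """k(v_n), with the upwind one-sided difference standing in for H'(v_n)."""
    fwd = np.zeros_like(H); bwd = np.zeros_like(H)
    fwd[:-1] = (H[1:] - H[:-1]) / dv      # forward difference, used where mu'_v > 0
    bwd[1:] = (H[1:] - H[:-1]) / dv       # backward difference, used where mu'_v < 0
    ratio = np.where(mu_p > 0, fwd, bwd) / H
    return delta / gamma + (1 - 1 / gamma) * (
        r + v / (2 * gamma) * (lam**2 - gamma**2 * sigma**2 * (1 - rho**2) * ratio**2)
    )

# --- The recursion (3), iterated to a fixed point ---
dt, eps = 0.1, 1e-8
A = np.linalg.inv(np.eye(N + 1) - S * dt)   # (I - S dt)^{-1}, computed once
H = np.ones(N + 1)                          # initial guess H^1
for it in range(100_000):
    H_new = A @ (dt + (1.0 - k_vec(H) * dt) * H)   # 1 dt + (I - K dt) H^k, premultiplied
    gap = np.max(np.abs(H_new - H))
    H = H_new
    if gap < eps:
        break
print(f"converged in {it + 1} iterations; H(v) ranges over [{H.min():.3f}, {H.max():.3f}]")
```

A larger $\Delta t$ speeds up convergence but worsens the $o(\Delta t)$ error in (3), which is exactly the trade-off parts (d) and (f) ask you to explore.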
(d) Check your code converges for the special case ρ = ±1. You will need to play with ∆t and N.
(e) Bonus Part: If you are familiar with the Method of Variation of Parameters, you can derive an
integral expression for the solution of (2) for the special case ρ = ±1. Use this expression to check
your code.
(f) Make sure your code converges for the general case $\rho^2 \neq 1$. You will need to play around some more with $\Delta t$ and $N$.
(g) Assume the stochastic discount factor process $\Lambda$ is given by
\[
\frac{d\Lambda_t}{\Lambda_t} = -r\,dt - \lambda\sqrt{v_t}\,dZ_t,
\]
where $\lambda > 0$ is a constant. Derive an expression for $dv_t$ under the risk-neutral measure $\mathbb{Q}$.
(h) Bonus Part: Find a way to calibrate your model. You can use two sets of data: the SP500 and
dividends (you can find monthly data on Robert Shiller’s website) and vanilla options on the SP500.
You can price the options using Heston’s model.
(i) Find the myopic and hedging demand components of the optimal portfolio and plot them as functions of $\sqrt{v_t}$ for the parameter values you have chosen (look online!) or calibrated; a plotting sketch follows below.
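Continuing from the iteration sketch above, and assuming the decomposition of $\phi_t = \lambda/\gamma - \rho\sigma H'(v_t)/H(v_t)$ into its first (myopic) and second (hedging) terms, the plot might be produced as follows:

```python
import matplotlib.pyplot as plt

# H, v, dv, lam, gamma, rho, sigma come from the iteration sketch above
dH = np.gradient(H, dv)                   # central-difference estimate of H'(v)
myopic = np.full_like(v, lam / gamma)     # myopic demand lambda/gamma, flat in v
hedging = -rho * sigma * dH / H           # hedging demand -rho sigma H'(v)/H(v)

plt.plot(np.sqrt(v), myopic, label="myopic demand")
plt.plot(np.sqrt(v), hedging, label="hedging demand")
plt.plot(np.sqrt(v), myopic + hedging, label="total weight in the SP500")
plt.xlabel(r"$\sqrt{v_t}$"); plt.ylabel(r"$\phi_t$"); plt.legend(); plt.show()
```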
(j) Use your results from the previous subquestion to provide wealth and expenditure management
advice for a client with current wealth of 1M GBP.
(k) Bonus Part: Use the reinforcement learning algorithm described in Section 3 of ‘Reinforcement
Learning for Continuous Stochastic Control Problems’ by Munos and Bourgine (on the Hub with
the assignment) to solve for the optimal policies without assuming a particular stochastic process
for v. Compare the optimal policies with those you found in the previous subquestion.
Additional Notes
From the ode and the Feynman-Kac Theorem, it follows that
\[
H(v_t) = E^{\mathbb{P}'}_t\left[\int_t^\infty e^{-\int_t^u k_s\,ds}\,du\right],
\tag{4}
\]
where $\mathbb{P}'$ is some probability measure we need to determine. Observe that
\[
dv_t = \mu'_v(v_t)\,dt + \sigma_v(v_t)\,dZ^{\mathbb{P}'}_{v,t},
\]
where $Z^{\mathbb{P}'}_v$ is a standard Brownian motion under $\mathbb{P}'$. We also know that
\[
dv_t = \mu_v(v_t)\,dt + \sigma_v(v_t)\,dZ_{v,t}.
\]
From Girsanov's Theorem,
\[
\mu'_v(v_t)\,dt = \mu_v(v_t)\,dt + E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right],
\]
where $M'$ is an exponential martingale under $\mathbb{P}$, which we shall now identify. From the above equation, we obtain
\[
\kappa'(\theta' - v_t)\,dt = \kappa(\theta - v_t)\,dt + E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right].
\]
Hence,
\[
E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right] = -(\kappa' - \kappa)v_t\,dt,
\]
because $\kappa'\theta' = \kappa\theta$. Therefore,
\[
E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right] = \frac{(\gamma - 1)\lambda\rho\sigma}{\gamma}v_t\,dt.
\]
Therefore, one possibility for $M'$ is
\[
\frac{dM'_t}{M'_t} = -\frac{(\gamma - 1)\lambda}{\gamma}\sqrt{v_t}\,dZ_t.
\]
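As a quick check that this $M'$ delivers the required covariance, note that $dv_t$ loads on $dZ_{v,t}$ with coefficient $\sigma\sqrt{v_t}$ and that $E_t[dZ_t\,dZ_{v,t}] = -\rho\,dt$, so
\[
E_t\left[\frac{dM'_t}{M'_t}\,dv_t\right]
= -\frac{(\gamma - 1)\lambda}{\gamma}\sqrt{v_t}\cdot\sigma\sqrt{v_t}\cdot(-\rho)\,dt
= \frac{(\gamma - 1)\lambda\rho\sigma}{\gamma}v_t\,dt,
\]
as required.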
We can use $M'$ to define $\mathbb{P}'$ via
\[
E^{\mathbb{P}'}_t[\mathbb{I}_A] = E^{\mathbb{P}}_t\left[\frac{M'_T}{M'_t}\mathbb{I}_A\right],
\]
where $A$ is an event realized at date $T$.
We shall derive the recursive scheme for finding $H$ via (4). Splitting the integral in (4) at $t + dt$ and using the law of iterated expectations, (4) implies
\[
H(v_t) = dt + e^{-k_t\,dt}\,E^{\mathbb{P}'}_t[H(v_{t+dt})]
= E^{\mathbb{P}'}_t[dt + (1 - k_t\,dt)H(v_{t+dt})].
\tag{5}
\]
We now discretize the state space for the stochastic process $v$. This means using a continuous-time Markov chain under $\mathbb{P}'$ to approximate $v$ under $\mathbb{P}'$. We assume the Markov chain can take values in the set $\{v_1, \dots, v_{N+1}\}$. The Markov chain which approximates $v$ has transition matrix $S$, where $p_{n,n+1}\,dt$ is the probability under $\mathbb{P}'$ of the Markov chain taking the value $v_{n+1}$ at the end of the interval $[t, t + dt)$, conditional on the current value being $v_n$. Hence, we obtain the discretized state-space version of (5):
\[
H_i = \sum_{j=1}^{N+1}\left[e^{S\,dt}\right]_{ij}\left[dt + (1 - k_i\,dt)H_j\right],
\]
where $H_i = H(v_i)$, $k_i = k(v_i)$ and $[e^{S\,dt}]_{ij}$ is the probability under $\mathbb{P}'$ of transitioning from state $i$ (where the chain takes value $v_i$) to state $j$ (where the chain takes value $v_j$) over the interval $[t, t + dt)$.
In vector-matrix form, we have
\[
H = e^{S\,dt}\left[\mathbf{1}\,dt + (I - K\,dt)H\right].
\]
Observe that $H \mapsto e^{S\,dt}[\mathbf{1}\,dt + (I - K\,dt)H]$ is an operator, but is not a linear operator, because $K$ depends on $H$. The above equation tells us that $H$ is a fixed point of this nonlinear operator.
Now observe that $e^{S\,dt} = (e^{-S\,dt})^{-1}$. Hence, when we discretize time, we obtain, in vector-matrix form,
\[
H = (e^{-S\Delta t})^{-1}\left[\mathbf{1}\,\Delta t + (I - K\Delta t)H\right].
\]
Using the expansion $e^{-S\Delta t} = I + \sum_{n=1}^{\infty}(-1)^{n}\frac{S^{n}(\Delta t)^{n}}{n!}$, we obtain
\[
H = (I - S\Delta t)^{-1}\left[\mathbf{1}\,\Delta t + (I - K\Delta t)H\right] + o(\Delta t).
\]
We can start with a guess for H and thereby obtain the recursive scheme (3).
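A quick numerical sanity check of replacing $(e^{-S\Delta t})^{-1}$ by its first-order truncation $(I - S\Delta t)^{-1}$, run on a hypothetical 3-state generator with the same tridiagonal sign pattern as $S$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state generator: rows sum to zero, off-diagonal rates positive
S = np.array([[-2.0,  2.0,  0.0],
              [ 1.5, -3.5,  2.0],
              [ 0.0,  1.5, -1.5]])

for dt in (1e-1, 1e-2, 1e-3):
    gap = np.max(np.abs(expm(S * dt) - np.linalg.inv(np.eye(3) - S * dt)))
    print(f"dt = {dt:.0e}: max |e^(S dt) - (I - S dt)^-1| = {gap:.2e}")  # shrinks like O(dt^2)
```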