Very minor cosmetic update.
This combines the changes in GitHub releases 3.11.7 and 3.11.6. This release is focused on moving the MARSS User Guide and derivation files into `inst/userguide` and `inst/derivations` along with Makefiles to build them. This will facilitate updating MARSS and converting the User Guide to an eBook.

* Moved the pkgdown files into the `pkgdown` folder.
* `Suggests:` removed packages that were associated with the User Guide and tests and not used in examples.
* `Quick_Start.Rnw` converted to Rmd so it works better with pkgdown and renders to html for CRAN.
* `Additional_repositories: https://atsa-es.r-universe.dev` added.
* `predict_marssMLE()`: the newdata model.tsp was not being set.
* `plot.marssMLE()`: the CIs for the state residuals did not plot correctly because there is always an NA at the end.
* `vignettes/Learning_MARSS.Rmd` added.
* `MARSS.dfa()` added to the manual built by CRAN by removing keyword internal.
* `inst/DEVELOPER_NOTES.md` added.
* `EMDerivation.Rnw`: random variables are set in slant font.
* `all.equal.vector` renamed to `vector.all.equal()` so it is not interpreted as a method of `all.equal()`.

This release is focused on adding the new method `method="TMB"`, which uses the package {marssTMB}. A minimal sketch of a fit with this method appears after this list.

* Updated `MARSS()` to recognize these new methods.
* Added a `MARSSfit()` generic and methods for "kem" and "BFGS".
* Updated `R/onLoad.R` and made it easier to specify new methods and constraints on methods.
* Updated `checkMARSSinputs.R`, `is_marssMLE.R` and `is.validvarcov()` so they are not so specific to `method="BFGS"` but work for any methods with similar constraints.
* `MARSSvectorizeparam()`: needed when writing methods for the `MARSSfit()` generic.
* `optim()` was using the number of function calls, not the number of gradient calls.
* `toLatex()` can output the raw tex so that it can go directly into RMarkdown or Quarto.
* `match.arg()` used for form and method.
* Ignore. Pre-emptive.
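As an illustration only, here is a minimal sketch of fitting the same model with the default EM algorithm and with the TMB method. The dataset and model specification are arbitrary examples (not from these notes), and the TMB fit assumes the {marssTMB} package has been installed, e.g. from the atsa-es r-universe repository listed above.

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])  # two harbor seal count time series

# default EM fit
fit_em <- MARSS(dat, model = list(Q = "diagonal and equal"))

# same model via TMB; requires the marssTMB package, e.g.
# install.packages("marssTMB",
#   repos = c("https://atsa-es.r-universe.dev", "https://cloud.r-project.org"))
fit_tmb <- MARSS(dat, model = list(Q = "diagonal and equal"), method = "TMB")
```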
This release is focused on improving the plotting functions for marssMLE, marssResiduals and marssPredict objects. The website links also needed to be updated to the new GitHub organization home for MARSS (and the other ATSA material): atsa-es.

* Expanded `autoplot.marssMLE` and `plot.marssMLE`: all types of residuals with all possible standardizations versus time, plus ACF and QQ plots for the same, and all possible fitted y and x plots. Cleaned up the plots in various ways (e.g. missing CIs). Added notes (that can be turned off) to the bottom of the `autoplot` plots to explain what the plot is and guide the user to the more standard plots.
* Changed `autoplot.marssMLE()` and `plot.marssMLE()` to allow a full range of residuals plots but to show only a subset of residuals diagnostics plots by default. See the sketch after this list.
* Added `autoplot.marssResiduals()` and `plot.marssResiduals()` for marssResiduals objects. These simplify standard residuals plots. This needs to be separate from `plot.marssMLE()` (i.e. cannot be called from `plot.marssMLE`) since it is designed to plot whatever happens to be in the marssResiduals object passed to `plot.marssResiduals()`. `plot.marssMLE()` runs `residuals()` to create a specific set of residuals diagnostics plots.
* Improved `autoplot.marssPredict()` with better titles and notes below the plots. Removed `pi.int` as an argument for `autoplot.marssPredict()` and `plot.marssPredict()` as this was extraneous. The PI/CI info is pulled in from the marssPredict object.
* Added `match.arg.exact()`, which does exact argument matching. The base R `match.arg()` uses `pmatch()` and does partial matching. This is a problem for many functions where `"xtt1"` is different than `"xtt"`. This function implements exact matching.
* `coef.marssMLE()` when `type="matrix"`.
* Added documentation to `MARSSresiduals()` to explain variance and correlation for standardized residuals.
* Added `var.ytt1` and `var.Eytt1` to the output from `MARSShatyt()` when `only.kem=FALSE`. This is for convenience for the plot functions.
* Added a `str_to_sentence` utility function for the notes in `autoplot.marssMLE()` plots.
* Added `interval.type` to the marssPredict objects; otherwise the type of interval in the object (prediction or confidence) cannot be known.
* `plot.marssPredict()` was not showing the forecasts and the state predictions when `h=0`, and was garbling the CIs and PIs when a short `newdata` was passed in (short = shorter than the original data).
* `autoplot.marssPredict()` was not using the time info from the ts object, so the x-axis was showing 1, 2, 3, etc. instead of the years, for example.
* `MARSS.dfa()`, used if `form="dfa"`, allowed Z to be passed in. This form is a helper function that sets up a default DFA model with a user-specified number of trends (m). If the user needs a custom Z, they should not use `form="dfa"` but use the default `MARSS()` (`form="marxss"`). `MARSS.dfa()` was changed to not allow Z to be passed into the model argument.
* `coef.marssMLE()` was not properly showing a time-varying A or U when `type="matrix"`, `form="marss"` and D or C estimated.
* `plot.marssMLE()` was not resetting `par()` when done, thus affecting the user's plot environment.
* The `residuals.marssMLE()` value column for states was wrong when there was more than one state because `c($.x[2:TT], NA)` was used. Changed to not offset either the `.x` or `.fitted` columns and instead added clarification to the documentation for `residuals.marssMLE()`. The value column was used for examples in the help file for `residuals.marssMLE()` and for the coefficient of determination reported by `glance.marssMLE()`.
* `MARSS()` was not setting convergence to an error code when the Kalman filter function throws an error before the model is fit.
* Removed `MARSSsettings()`. This was replaced with `pkg_globals` in the package environment via `.onLoad()`.
* Use `\eqn{}` when R, Q, etc. refer to the matrices in the MARSS equation versus code.
* `residuals_marssMLE.Rd` had a few typos. The main one was that the name column was called the .type column.
* `Residuals.Rnw`: the LaTeX definition for `Vtt1` was not working. Also fixed a couple of misspellings in `EMDerivation.Rnw`.
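A minimal sketch of the plotting workflow these changes target. The model and data below are arbitrary examples; the ggplot2 versions (`autoplot`) require the ggplot2 package, and the forecast horizon argument `h` is the one described in these notes.

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat, model = list(Q = "diagonal and equal"))

plot(fit)                 # base-graphics diagnostic plots for the marssMLE object
resids <- residuals(fit)  # a marssResiduals data frame
# autoplot(fit); autoplot(resids)   # ggplot2-based versions (require ggplot2)

pr <- predict(fit, h = 5) # 5-step-ahead forecast, a marssPredict object
plot(pr)                  # styled like plot.forecast() from the forecast package
```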
This is an update based on version 3.11.2 (GitHub release). It is mainly focused on providing graceful exiting for models that report errors due to ill-conditioned variance matrices and for models with fixed parameters. The testing and output (`plot`, `residuals`, `tsSmooth`, `fitted`) was made less reliant on `MARSSkfss()`, which involves an inversion of Vtt1 and which can become ill-conditioned and report an error. The update also fixes a bug in the log-likelihood calculation due to not specifying `tol=0` in the `SSModel()` call. This bug would come up only for variance matrices with extremely high condition numbers fit with `method="BFGS"`. Data and covariates can now be a ts object and the time information will be used for plotting.

* `MARSSkfss()` calls are skipped when `trace=-1`. `MARSSkfss()` is used for error checks (because it has verbose information to indicate model problems), but because it uses matrix inversions it will stop models from being fit just because they cannot be run through `MARSSkfss()`, even if they run fine with `MARSSkfas()`, which doesn't use these matrix inversions.
* `t` column in fitted, residuals and tsSmooth output.
* Added `xtt` and `Vtt` to `MARSSkfas()` to avoid `MARSSkfss()` calls when unnecessary.
* Graceful exiting when `MARSS()` was run with `fit=FALSE`.
* Graceful exiting for `MARSSparamCIs()` when the model is fixed and thus no parameters are estimated.
* Catch problems with `VtT`, which sometimes happen for `MARSSkfas()`, and give the user helpful suggestions for switching the Kalman filter/smoother function.
* In `KFS()`, a tolerance correction affected the log-likelihood value when R was below the square root of machine tolerance or the condition number was very high (if R non-diagonal). This created a large (incorrect) jump in the log-likelihood. This would be reported with a warning that the log-likelihood dropped if using the EM algorithm. The solution was to set the tolerance to 0 in the KFAS model in `MARSSkfas()`. Note this did not happen for all cases of small R, and a warning would have been generated alerting the user to a problem.
* `MARSSkfas()` did not recognize if H was time-varying.
* When the model passed to `MARSS()` was a marssMLE or marssMODEL object, the tinitx and diffuse elements were not being passed in, only the parameter matrices.
* `plot.marssMLE()` and `autoplot.marssMLE()`.
* `marssMLE$fun.kf` was not always being passed to `MARSShatyt()`, so it didn't necessarily use the function requested by the user.
* `coef.marssMLE()` did not change `what` to, say, `par.se` if `type` was passed in, so `coef(fit, what="par.lowCI", type="Z")` returned `coef(fit, type="Z")`.
* `print.marssMLE()` and `coef.marssMLE()` would fail ungracefully if all the parameters were fixed or `MARSS()` was run with `fit=FALSE`.
* `xtt` and `xtt1` etc.
* `Residuals.Rnw`.
* `kem.plank.4`.
* If `tinitx=1`, then `Vtt1T[,,1]` does not exist. Replaced `Vtt1T[,,1]` with NA instead of 0 in this case. Note `Vtt1T[,,1]` would never be used in this case as `V10T` is used instead; however, a value of 0 is not correct. The value does not exist, so NA is the correct value.

This update is focused on graceful exiting for models that report errors due to ill-conditioned variance matrices or for models with fixed parameters. The testing and output (plots, residuals, tsSmooth, fitted) was made less reliant on `MARSSkfss()`, which involves an inversion of Vtt1 and which can become ill-conditioned and report an error. The update also fixes a bug in the log-likelihood calculation due to not specifying `tol=0` in the `SSModel()` call. This bug would come up only for variance matrices with extremely high condition numbers fit with `method="BFGS"`. Data and covariates can now be a ts object and the time information will be used for plotting.

See notes above for version 3.11.3 (CRAN release).
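A minimal sketch of passing a ts object so the time axis carries through to the plots. The data are simulated here purely for illustration, and this assumes a MARSS version at or after this release.

```r
library(MARSS)
set.seed(1)
# a univariate ts with a real time axis (years 1980-2019)
y <- ts(cumsum(rnorm(40)), start = 1980)
fit <- MARSS(y)
# plots of fitted values and states now use the years from the ts object
plot(fit)
```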
* Tests use `skip_on_cran()` as they are for internal testing. This directory is not part of the CRAN package but is on the GitHub site.

Version 3.11.1 is focused on the addition of the `predict`, `forecast`, `fitted` and `residuals` functions along with plotting functions for the output. Documentation for these functions along with background literature and the derivation of the residuals algorithms have been updated. Residuals in state-space models are complex as there are two processes (observation and state), three types of conditioning (data to t-1, t or T), and four types of standardization used in the literature (none, marginal, Cholesky on the full variance matrix, and Cholesky on only the model or state residual variance). The MARSS package computes all the variants of residuals. Many of the `predict` changes are listed below for the 3.10.13 release on GitHub. New chapters illustrating structural time series models using MARSS versus `StructTS` and the KFAS package were added. The KFAS chapter compares the KFAS residuals functions to the MARSS residuals functions. The two packages use different algorithms and different semantics to compute the same residuals.
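A brief sketch of the two main residual conditionings described above. The dataset and model are arbitrary examples; see the help files for the full set of types and standardizations.

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat)

# innovations (one-step-ahead) residuals, conditioned on data up to t-1
innov <- residuals(fit, type = "tt1")

# smoothation residuals, conditioned on all the data (t = 1..T)
smooth <- MARSSresiduals(fit, type = "tT")
```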
* `ldiag()` convenience function added to make list diagonal matrices. This replaces having to do code like `a <- matrix(list(0),2,2); diag(a) <- list(2,"a")`. Now you can call `ldiag(list(2,"a"))`.
* Added `accuracy.marssMLE()` and `accuracy.marssPredict()`, which return accuracy metrics sensu the forecast package.
* `is.unitcircle()` utility function: added a tol argument so that it does not fail if abs(eigenvalue) is above 1 by machine tolerance.
* `plot.marssMLE()` and `autoplot.marssMLE()`.
* `residuals.marssMLE()`: got rid of `augment.marssMLE()` and renamed it `residuals.marssMLE()`. The old `residuals.marssMLE()` became `MARSSresiduals()`. There was too much duplication between `residuals.marssMLE()` and `augment.marssMLE()` and between `augment.marssMLE()` and `fitted.marssMLE()`. Also I want to minimize dependency on other packages, and augment is a class in the broom package. This required changes to the `glance.marssMLE()`, `plot.marssMLE()` and `autoplot.marssMLE()` code.
* `tidy.marssMLE`: `tsSmooth.marssMLE` now returns the estimates from the Kalman filter or smoother, which `tidy.marssMLE` had returned. `tidy.marssMLE` only returns a data frame for the parameter estimates.
* Changed to `solve(t(Vtt1[,,1]), B%*%V00)`, which should be faster and seems to have lower numerical error.
* `predict.marssMLE` updated to return ytt1, ytt and ytT.
* `MARSSresiduals.tt1` and `MARSSresiduals.tt`, but not returned by `residuals.marssMLE()` (unless `clean=FALSE`). Only returned by `MARSSresiduals()`.
* Fixed `residuals()` in cases where R=0. In v 3.10.12, I introduced a bug into `MARSSkfss()` for cases where R has 0s on the diagonal. History: to limit propagation of numerical errors when R=0, the row/col of Vtt for the fully determined x need to be set to 0. In v 3.10.11 and earlier, my algorithm for finding these x was not robust and zeroed out Vtt rows/cols when it should not have if Z was under-determined. This bug (in < 3.10.12) only affected underdetermined models (such as models with a stochastic trend and AR-1 errors). To fix it, I added a utility function `fully.spec.x()`. This returns the x that are fully determined by the data. There was a bug in these corrections which made `MARSSkfss()$xtT` wrong whenever there were 0s on the diagonal of R. This would show up in `residuals()` since that was using `MARSSkfss()` (in order to get some output that `MARSSkfas()` doesn't provide). The problem was `fully.spec.x()`. It did not recognize when Z.R0 (the Z for the R=0) was all 0 for an x and thus was not (could not be) fully specified by the data. The fix was a simple check that the colSums of Z.R0 were not all 0.
* `MARSSresiduals.tt1()` was reporting the smoothations instead of the innovations residuals, so it reported the same model residuals as `MARSSresiduals.tT()`.
* `base::chol()` returns the upper triangle. Thus the lines in `MARSSresiduals.tT()` and `MARSSresiduals.tt1()` that applied the standardization needed `t(chol())`.
* `trace=1` would fail because `MARSSapplynames()` did not recognize that `kf$xtt` and `kf$Innov` were a message instead of a matrix. I had changed the `MARSSkfas()` behavior to not return these due to some questions about the values returned by the KFAS function.
* `is.validvarcov()` used eigenvalues >= 0 as passing the positive-definite test. It should be strictly positive, so > 0.
* `MARSSkfas()` had a bug on the line where `V0T` was computed when `tinitx=0`. It was using `*` instead of `%*%` for the last `J0` multiplication. It would affect models with a non-zero `V0` under certain `B` matrices, such as structural models fit by `StructTS()`.
* Bug in `MARSSresiduals.tT()` and `MARSSresiduals.tt1()` that was in the old `residuals.marssMLE()` also. If the MLE object had the `kf` element, `kf` was not assigned in the code since there was no `kf <- MLEobj$kf` line for that case. Normally MLE objects do not have the `kf` element, but it could be added or is added for some settings of `control$trace`. This caused `residuals()` to fail if `trace=2`.
* `MARSS_dfa.Rd`.
* Comparison of `StructTS()` to `MARSS()` output.
* Removed `augment()` from documentation and manuals. Replaced with `residuals()`.
* `predict.marssMLE.Rd` (help page) had a bug in the examples. Removed `Q=Q` from the model list in the first example.
* `predict()` and `predict.marssMLE()`.
* Changed `fitted.marssMLE` to have a column with .x, which is conceptually the same as the y column for observations. It is the left side of the x equation (with error term) while fitted is the right side without the error term.
* `predict.marssMLE()` when no data are passed in.
* `predict.marssMLE()` so that the user can specify x0 if needed.
* `residuals.marssMLE()`: the data frames are still in tibble form. Removed all references to tibbles in the documentation.
* With `trace = -1`, some tests were still being done. I added a test for `trace = -1` to a few more test lines in `MARSS.R` and `MARSS_marxss.R`.
* `residuals.marssMLE()` to return innovations instead of smoothations.
* Columns returned by `fitted.marssMLE`, `residuals.marssMLE`, and `tsSmooth.marssMLE` have a leading ".".

Version 3.10.13 mainly has to do with the `predict()` and `forecast()` functions along with plotting and printing methods.

* `MARSSkfss()` and `MARSSkfas()`: add rownames to the x elements of the list.
* `MARSSkf()`: added `newdata` to allow the user to pass in a new dataset to fit with the fitted model.
* `predict.marssMLE()`: shows the prediction or confidence intervals for data or states. Forecasts can be done by passing in `h`. `newdata` can be passed in also, and the fitted model will be used to fit these data and show the intervals. Output is in the same form as a tibble (but not a tibble). Returns a list of class marssPredict.
* `forecast.marssMLE()`: does the forward forecasting past the end of the data. Intended to be called by `predict.marssMLE`. I did not write a marssMLE method for the `forecast` generic in the forecast package since that would require that the forecast package be required for the MARSS package.
* `plot.marssPredict()`: plot method for the new marssPredict object. This is designed to look like the `plot.forecast()` function in the forecast package.
* `print.marssPredict()`: print method for marssPredict objects.

The version 3.10.12 update mainly has to do with the `tidy()`, `fitted()` and `augment()` enhancements which clarify ytT, xtT and residual intervals for MARSS models. This was a major update, though probably users will not notice much as it only affects residuals output. A few minor bugs were fixed which caused errors to be thrown in some rare time-varying cases. One bug that affected bootstrap confidence intervals was fixed. The documentation got a major clean-up. The Residuals report has been heavily edited to improve precision and clarity (with added verbosity). The help files and automated manual from the help files were cleaned up and some of the internal functions were moved out of the manual.
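A brief sketch of the kind of output these functions return. The dataset is an arbitrary example, and the argument names follow the current MARSS help files (the names in this particular release may differ slightly).

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat)

# parameter estimates with CIs as a data frame
tidy(fit)

# fitted values for y conditioned on all data, with confidence intervals
head(fitted(fit, type = "ytT", interval = "confidence"))
```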
* `MARSSsimulate()`: missing values were placed in the wrong positions in simulated data. This would affect all simulated data with missing values and thus any function that used `MARSSboot()`, for example bootstrap confidence intervals for a data set with missing values. The default is to use the Hessian, so the user would not normally have encountered this bug, and it had little effect on the CIs.
* `fitted.marssMLE()`: fixed a bug in fitted.marssMLE for states when one.step.ahead=TRUE. It was using xtt1[,t-1] instead of xtt[,t-1]. The former meant it only used data up to t-2.
* `degen.test()` in MARSSkem() was not catching when R or Q was time-varying (and thus degeneracy should not be allowed). Changed to test whether the 3rd dimension of model.dims is 1 or not.
* `residuals.marssMLE(..., Harvey=TRUE)` would fail if Q, B, or G was time-varying because parmat() was called with t+1. Changed to only call parmat() when t<TT. Q, B, and G do not appear in the recursion when t=TT, so parmat() with t=t+1 is never needed.
* `MARSS_dfa()` used `form="dfa"` in the MARSS.call list. Just info. Never used.
* `is.design()`.
* `toLatex.marssMODEL()`: fixed some old bugs in toLatex_marssMODEL.R. Added the S3 class declaration in NAMESPACE for toLatex. Fixed the equation attribute in MARSS_marxss: G{t} was used instead of G_{t}. Only affected toLatex_marssMODEL(). Had an extra line in build.mat.tex() that removed the last line of matrices. This function was not exported, so users would never have run into these bugs.
* `MARSSkfss()`: to limit propagation of numerical errors when R=0, the row/col of Vtt for the fully determined x (determined from data) need to be set to 0. My algorithm for finding these x was not robust and zeroed out Vtt rows/cols when it should not have if Z was under-determined. MARSSkfss() is not used for fitting and only affected underdetermined models (such as models with a stochastic trend and AR-1 errors). To fix this, I added a function `fully.det.x()` to the utility functions. This returns the x that are fully determined by the data. Note, `MARSSkfss()` is the classic Kalman filter/smoother. The MARSS algorithm does not use this normally. Normally MARSSkfas(), built off the Koopman et al. algorithm which avoids unneeded matrix inverses, is used. MARSSkfas() uses the Kalman filter/smoother in the KFAS package.
* `MARSShatyt()`: added ytt, ytt1, Ott, Ott1 to MARSShatyt() so that tidy.marssMLE() can more easily return the one-step-ahead predictions for Y(t). Also added var.ytT and var.EytT so you can easily get the estimates, CI and prediction intervals for missing data. Added only.kem to MARSShatyt() so that only values conditioned on 1:T, as needed by the MARSS kem algorithm, are returned. This makes the Ey part of a MARSS object smaller and speeds up MARSShatyt() a little.
* `tidy.marssMLE()`: changed type for tidy() to xtT, ytT and fitted.ytT. tidy() exclusively gives estimates of things (parameters, X, Y, fitted Y) conditioned on all the data.
* `fitted.marssMLE()`: added interval=c("none", "confidence", "prediction") to fitted(), which returns a list with se's (or sd's if prediction) and intervals. Also added a conditioning argument to fitted.marssMLE which gives fitted values with different conditioning. Changed the default output to a tibble.
* `augment.marssMLE()`: changed the standard errors output for augment() to .se.fit for the std error of fitted y and .sigma for the std error of residuals. This matches what augment.lm outputs.
* `plot.marssMLE()` and `autoplot.marssMLE()`: added plot.par to plot.marssMLE and autoplot.marssMLE so that the plots can be customized. Added a plot of ytT to both functions. Changed the residuals plots to use the CIs for the residuals, not the loess CIs.
* `MARSSinfo()`: added "AZR0" to MARSSinfo() to give info if the user gets the error that A cannot be estimated with R=0. Added a more informative message to MARSSkemcheck() for that case.
* `residuals.marssMLE()`: this is now a helper function which calls `MARSSresiduals.tT()` or `MARSSresiduals.tt1()`. The former gives smoothation residuals and the latter innovations (one-step-ahead) residuals.
* Internal functions no longer show up via `?` or `help.search`.
Minor update. Version 3.10.11 has some edits to speed up the code by minimizing calls to expensive checking functions, and fixes a bug in `MARSSharveyobsFI()` that appeared if a parameter was fixed and time-varying and `MARSSparamCIs()` was called.

* `MARSSkfss()` had a bug: "*" was used in place of %*% with J0. It would never show up unless V0 was estimated.
* Fixed a bug in `MARSSharveyobsFI()` which arose if a parameter was fixed and time-varying. This caused MARSSparamCIs() to fail if a parameter was fixed and time-varying.
* `dparmat()` did not return values if the parameter was time-varying and fixed. Caused tidy() to return an error for dlm models.
* `is.validvarcov()` is expensive; minimize calls to it. If called with a diagonal matrix, it should automatically pass, so added a check to is.validvarcov() to see if the matrix is diagonal.
* `is.marssMLE()` is expensive. Replaced with a call to class().
* Added the `autoplot.marssMLE()` function and updated the plot documentation to cover the autoplot functions.
Minor update to declare S3 objects if the user has the broom package installed. A few minor changes were also made.

* `is.validvarcov()` is expensive. Minimized its use in MARSSaic, MARSSboot, MARSSoptim, MARSSparamCIs, MARSSsimulate, MARSS_marxss.
* Added an `is.diagonal()` utility function.
* Renamed `plot.marssMLE()` to `autoplot.marssMLE()` since it is ggplot2 based.
* Added a new `plot.marssMLE()` that uses only base R graphics.
* `MARSSkfss()` had a bug on the V0T line: "*" instead of %*%. This code is almost never called.
* `MARSSkfss()` had a bug: it did not check if G was time-varying. This code is almost never called.
Major update over 3.9. The main changes have to do with errors in the Hessian matrix whenever the Cholesky of the R or Q matrix was used (when they weren't diagonal). This affected all the residuals and confidence intervals calculations for non-diagonal R and Q. The Hessian for non-diagonal Z was also bad. Version 3.10.8 completely abandons working with the Cholesky-transformed variance-covariance matrices for the Hessian calculation. The Cholesky transformation was not necessary for computing the Hessian since the Hessian is computed at the MLEs and localized. Also the default Hessian computation now uses the Harvey et al. analytical algorithm for the Hessian rather than a numerical estimate.

* `residuals.marssMLE()`
* The `MARSSinits_marxss()` function would give an error if U, A, C, or D were fixed and the user passed in inits. inits are ignored in this case, so it should not throw an error.
* `alldefaults` could be updated by form. A few functions were neglecting to (re)load alldefaults or to reassign `alldefaults` when updated: `is_marssMLE()`, `MARSSinits.marxss()`, `MARSSinits()`. The variables in the `pkg_globals` environment should be (and only be) loaded when needed by a function and only loaded into the function environment.
* `MARSSkf()` was not passing optional function args to `MARSSkfas()`.
* `MARSSkfss()` was miscounting the number of data points when R=0, V0=0, and tinitx=1. When Ft[,,1]=0 (e.g. when R=0, V0=0, and tinitx=1), `MARSSkfss()` was including the y[1] associated with Ft[,,1]=0 in the number of data points. These should be excluded since they don't affect x10.
* `MARSSparamCIs()` gave the wrong s.e. for variances and covariances when method="hessian". It also gave the wrong CIs for variances and covariances when the variance-covariance matrix was non-diagonal. There were a series of issues related to back-transforming from the Hessian of a Cholesky-transformed variance-covariance matrix (Sigma = chol %*% t(chol)).
* In `MARSSparamCIs()`, version 3.9 was getting the Hessian matrix numerically using a variance-covariance matrix that had been transformed with a Cholesky decomposition to ensure it stays positive-definite. The upper and lower CIs were computed from the s.e.'s. I back-transformed the Hessian to the original (non-Cholesky transformed) scale the same way I back-transformed a variance-covariance matrix. But the variance of s^2 is not var(s)^2, which is what I was doing, essentially. So the s.e. for R and Q were wrong in all cases. Note, using a Hessian to estimate CIs for variance-covariance matrices is generally a bad idea anyhow.
* Bug in `MARSShessian()` in subscripting the d matrix when doing the Cholesky transformation. Caused NAs for cases with non-diagonal matrices. However, had the standard errors been returned, they would be wrong for non-diagonal matrices because the elements of the Cholesky-transformed matrices do not correspond one-to-one to the non-transformed matrices. E.g. the untransformed [2,2]^2 is Cholesky transformed [1,2]^2 + [2,2]^2. The Hessian used was for the Cholesky transformation, and the curvature of the LL surface for the Cholesky-transformed values is different than the curvature for the untransformed variance-covariance matrix elements.
Fix: I completely abandoned working with the Cholesky-transformed variance-covariance matrices for the Hessian calculation. The Cholesky transformation was not necessary for computing the Hessian since the Hessian is computed at the MLEs and localized.
Created a new function MARSSharveyobsFI()
which uses
the Harvey (1989) recursion to analytically compute the observed Fisher
Information matrix. This is the Hessian for the untransformed
variance-covariance matrix parameters. So CIs on variances can be
negative since the variance of the MLE is being approximated by a MVN
(which can lead to negative lower CIs).
Harvey1989 is now the default function when method="hessian". In later versions of MARSS, this is changed to Holmes2014.
The user can also select method=‘hessian’ and hessian.fun=‘fdHess’ or hessian.fun=‘optim’. This will compute the Hessian (of the log-LL function at the MLEs) numerically using these functions. The variance-covariance matrices are NOT Cholesky transformed. These are numerically estimated Hessian matrices of the untransformed variance-covariance matrices.
Added MARSSinfo(26)
which discusses the reason for
NAs in the Hessian.
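For example, a sketch using the options described above (the dataset and default model are arbitrary illustrations):

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat)

# analytical Hessian (Harvey 1989 recursion), the default
fit_h <- MARSSparamCIs(fit, method = "hessian")

# numerically estimated Hessian instead
fit_n <- MARSSparamCIs(fit, method = "hessian", hessian.fun = "fdHess")
```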
* `MARSShatyt()` was setting ytt1 (expected value of y(t) conditioned on data up to t-1) to y(t), which is incorrect. The expected value of y(t) conditioned on y up to t-1 is Z xtt1 + a.
* `CSEGriskfigure()` panel 2 was wrong when mu>0 (increasing population).
* `CSEGriskfigure()` panel 2 CIs were wrong when CI method=hessian since Q had not been back-transformed (so it was using sqrt(Q)).
* `MARSSkemcheck()`'s test that fixed B is in the unit circle failed when B was time-varying and some elements were fixed and others estimated. Also, when the eigenvalues were complex, it should test the real part only.
* `coef.marssMLE()` was not stopping when an illegal `what` arg was passed in. The man page did not say what happens when type=parameter.
* Changed `MARSS.marxss()` and `MARSS.marss()` to allow G, H, and L to be passed in.
* Changed `MARSSkem()` to specify star lists with G, H, and L (mathbb(elem) in the EM Derivation).
* Changed `MARSSkfss()` to use Q* = G Q t(G), R* = H R t(H) and V0* = L V0 t(L).
* Removed `MARSSmcinits()` and added a chapter on searching over the initial conditions to the User Guide. As the MARSS models that MARSS() can fit expanded, `MARSSmcinits()` was increasingly obsolete and it was impossible to come up with good searching distributions. Because `MARSSmcinits()` was removed, the control$MCInits list item was removed also from the defaults and from the accepted input.
* Changed `MARSS.marxss()` to allow c and d to be 3D arrays. This allows one to use inits=fit to set inits and not get a "d (or c) must be 2D" error.
* Added `MARSSinfo(4)` regarding errors about R=0 and x0 not fixed. Added info to the error warnings to direct the user to MARSSinfo().
* Added `pchol()` and `psolve()` functions to return the Cholesky transformation or inverse (via solve) when there are 0s on the diagonal.
* `print.marssMLE(x, what="par")` returned a vector of estimated values instead of the list of par. Changed to return the list.
* Additions to `MARSShatyt()` output. Needed for residuals.marssMLE().
* Changed `is.validvarcov()` so that it returns an error if the user specifies a structurally illegal variance-covariance matrix. Added info to MARSSinfo(25).
* Added `augment`, `tidy` and `glance` functions for marssMLE objects (see the sketch after this list).
* Added a `logLik` method for marssMLE objects.
* Added a `fitted` method for marssMLE objects to return Z xtT + a (the model fitted value of y).
* Added a `plot` method with diagnostics.
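A minimal sketch of these methods on a fitted model. The dataset is an arbitrary example, and `tidy()` and `glance()` follow the broom-style generics as exported by MARSS.

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat)

logLik(fit)   # log-likelihood of the fitted model
tidy(fit)     # parameter estimates as a data frame
glance(fit)   # one-row model summary (logLik, AIC, convergence info, ...)
plot(fit)     # diagnostic plots
```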
None. Resubmission due to a missing file.
* Added checks to `checkMARSSinputs()` and `print.marssMLE()` to make sure models are class marssMODEL.
* Changed `summary.marssMODEL()` to return the list matrix instead of the marssMODEL passed in. Added tinitx to the returned (and printed) list.
* Removed the `is.blockunconst()` and `is.blockequaltri()` functions. They were not really used or useful and were buggy.
* Changed `MARSSoptim()` code to ensure varcov matrices stay positive-definite.
* `MARSS()`: `MARSSkf()` is used to return kf (so use what the user requested), but Innov, Sigma, Kt etc. are set with `MARSSkfss()`.
* `MARSSkem()`: removed adding of kf and Ey when trace>0. This happens in MARSS().
* Changed `summary.marssMODEL()` to use marssMODEL attributes for par.names and model.dims, so it works on non-marss form marssMODEL objects.
* `MARSShessian()`
* `MARSSinfo()`: give the user some code to convert a pre-3.5 marssMLE object to the 3.5+ form.
* `MARSSkfss()`: when Z was not square (number of rows > number of cols), OmgRVtt was not getting set. OmgRVtt sets Vtt diagonals (and corresponding cols and rows) to zero when R has 0s on the diagonal.
* `MARSSkfas()` was returning $Innov and $Sigma using $v and $F, but as detailed in the KFS help page (KFAS package), the ones returned by KFS are not the same as the standard innovations and Sigma for multivariate data. Now, `MARSSkfas()` returns a text message to use `MARSSkfss()` to get these.
* `residuals.marssMLE()` and `MARSSinnovationsboot()` were not running `MARSSkfss()` to get Innov, Kt, and Sigma when R was not diagonal. The problem occurred after I changed `MARSSkfss()` to return a text error instead of NULL for these.
Version 3.7 update was required due to a new version of KFAS that changed its API.

* Fixed an `allow.degen()` bug that would set elements to zero, leading to non-positive-definite matrices. Test if Q and R are diagonal. If not, don't allow 0s to be set on the diagonal since that is likely to lead to non-positive-definite matrices. I could test if the row/col covariances are 0s, but that would be costly.
* Fixed a `loglog.conv.test()` bug that returned NAs when logLik > 720 due to the exp(LL) call. Changed to exp(LL-mean(LL)).

Version 3.6 is mainly concerned with speeding up `MARSS()` for problems with a large number of time series (n > 100) and where many R elements are being estimated (e.g. R="diagonal and unequal"). This comes up often in dynamic factor analyses. The changes also improve speed for small R problems by about 25%, but the speed increase is 10-fold for problems with R matrices that are 100x100 with 100 estimated R elements.
* Changed `MARSS.marxss()` to speed up conversion of the "unconstrained" shortcut to a matrix. Only matters if m or n is big.
* `convert.model.matrix()`: the old version was always using slow code to deal with * and + in a character matrix. This made formation of the free and fixed matrices very, very slow when the matrix got big (100x100, say).
* Changed `is.design()` to not use near equality for the test of element==0. This may break MARSS() since R sometimes doesn't maintain "zeroness".
* Optimized the `MARSSkem()` code for working with large matrices. Replaced all crossproducts with `crossprod()` and `tcrossprod()`, which are significantly faster for large matrices (see the sketch after this list). This increases speed 2-10 fold when working with larger matrices. The largest speed increases are when R is not diagonal and equal.
* In `MARSSkfss()`, stopped using the very slow `is.diagonal()` function (which is really meant for list matrices).
* Changed the `degen.test()` function so that it sets a flag to TRUE if any variance-covariance matrix diagonals are set to 0. If so, do the updates; otherwise skip.
* Sped up `parmat()` by testing whether the d and f matrices are time-varying. If they are not, don't subset the array but rather reset the "dim" attribute. Much, much faster for big d and f matrices.
* Changed `sub3D()` to make it a bit faster by using x[,,t] when both nrow and ncol are >1.
* Changed `vec()` to make it 3x faster by setting the dim attribute instead of using matrix() when the matrix is 2D.
* The `lakeWAplankton` datasets were saved as data.frame. Changed to matrix.
* `MARSS()` didn't print out the marssMLE object when convergence=12 (maxit set below min for conv test).
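A small illustration of the identity exploited by that replacement (this is not the package code itself, just base R behavior):

```r
# crossprod(A, B) computes t(A) %*% B, and tcrossprod(A, B) computes A %*% t(B),
# without forming the transpose explicitly, which is faster for large matrices.
set.seed(1)
A <- matrix(rnorm(1e6), 1000, 1000)
B <- matrix(rnorm(1e6), 1000, 1000)

all.equal(t(A) %*% B, crossprod(A, B))    # TRUE
all.equal(A %*% t(B), tcrossprod(A, B))   # TRUE
```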
Version 3.5 is mainly concerned with formalizing the internal structure of model objects. marssMODEL objects have been formalized with attributes. A form definition along with associated form functions have been defined. This won't be noticeable to users but makes writing functions that use marssMODEL objects easier and more versatile.

* Changed `MARSS.dfa()` to allow the B and Q setting to be "diagonal and equal" or "diagonal and unequal".
* Added `MARSS:::predict.marssMLE()` (further development will be done before exporting to users).
* Removed `MARSSvectorizeparam()` and `MARSSapplynames()` from the exported list. The former has been replaced by `coef(marssMLEObj, type="vector")` (see the sketch after this list). The latter is an internal utility function.
* Changed `MARSSkfas()` to return Innov and Sigma when R is diagonal. When R is not diagonal, the user is directed to use `MARSSkfss()` since `MARSSkfas()` and `MARSSkfss()` do not agree when R is not diagonal (and I think the error is in KFAS, as the Sigma looks off when R is not diagonal).
* Changed `MARSShessian()` to use a Cholesky transformation on any variances so that the variance-covariance matrices stay positive definite.
* `MARSSparamCIs()`
* `MARSS()`.
* `.onLoad()` function.
* Updated `MARSSkfas()` for the version of KFAS. The API changed in KFAS 1.0.0, and a line of code was added to use the correct API depending on whether KFAS 0.9.11 or 1.0.0 is installed. MARSS will work with both versions of KFAS.
* `MARSSinfo()`
* `lakeWAplanktonRaw`: Month^2 dropped and month not z-scored. Original raw data for the counts.
* `MARSSboot()` was out of date with the newest version of `MARSShessian()`'s returned arguments.
* An `is.blockunconst()` bug made it break on certain diagonal list matrices.
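For reference, a sketch of the `coef()` usage that replaced `MARSSvectorizeparam()` (the fitted model is an arbitrary example; output shapes depend on the model):

```r
library(MARSS)
dat <- t(harborSealWA[, c("SJF", "SJI")])
fit <- MARSS(dat)

coef(fit)                   # list of estimated parameters (the default)
coef(fit, type = "vector")  # the estimated parameters as a named vector
coef(fit, type = "matrix")  # full parameter matrices with fixed values filled in
```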
This version update is mainly concerned with adding generic functions (coef, residuals, predict), hooking back up the KFAS package filters into MARSS functions, and customizing print functions for different model forms.

* `MARSSkfas()` was changed to work with the new KFAS version released July 2012. This led to a 10-20 fold decrease in computation time for method="BFGS" and 2-fold for method="kem".
* `MARSSkf()` was changed to `MARSSkfss()`; `MARSSkf()` is now a utility function that picks `MARSSkfas()` or `MARSSkfss()` based on MLEobj$fun.kf.
* Added the lag-one covariance smoother to `MARSSkfas()` using the algorithm given on page 321 in Shumway and Stoffer (2000), Time Series Analysis and Its Applications (note the 2000 edition, not 2006). The algorithm is given in the User Guide also. The EM algorithm requires the lag-one covariance smoother, but this is not one of the outputs of the `KFS()` function in the KFAS package.
* Output of `MARSSkfas()` forced to be strictly 0 when it is supposed to be (when R has 0 on the diagonals); `KFS()` output has ~0, not actually 0.
* Added a `coef()` method for marssMLE objects. Added $coef to the marssMLE object.
* Changed `parmat()` to be hidden (not exported). Instead its functionality is available through the standard R function for this purpose, `coef()`.
* Added a `residuals()` method for marssMLE objects by changing `MARSSresids()` to `residuals.marssMLE()`.
* Added a `predict()` method for marssMLE objects.
* Added a `toLatex.marssMODEL()` function to create latex and pdf output of models.
* `MARSS.dfa()` did not allow the user to pass in Z as a matrix.
* `parmat()`: when t was a vector, `parmat()` only returned the value at max(t).
* `MARSSkemcheck()` crashed when the test "when u^{0} or xi^{0} are estimated, B adjacency matrix must be time invariant" was started.
* `MARSS_marxss()` threw an error when Z was passed in as a matrix and A="scaling".
* A `describe.marss()` bug caused diagonal matrices with 1 estimated value and fixed values to not be identified as diagonal.
Version 3.2 is a minor update to the documentation.

* Fixed a bug that stopped `MCInit()` from working.
Version 3.0 is a major update and clean-up. Besides the clean-up, the changes were to allow time-varying parameters and a way for the user to specify linear constraints using an equation like a+2*b in the parameter matrix.

The changes are extensive but are internal and should be largely invisible to users of MARSS 2.X. The `MARSS()` 3.0 call is backwards compatible to 2.9 except that kf.x0 changed to tinitx and moved from the control list to the model list. Use of KFAS remains disabled until I can update to the new version of KFAS. This slows down method="BFGS", but does not affect method="kem".

* The `MARSS()` call is retained (for reference).
* The `MARSSkf()` and `MARSShatyt()` functions take a MLEobj now.
* Added the `parmat()` function. This returns a parameter matrix when given a MLEobj.
* `MARSS()` looks for a function called `MARSS."form"`, where form is something like "mar1ss" or "dlm". This function takes the MARSS inputs (all of them) and transforms the input into a marssm object ready for the fitting functions, as the function writer wishes. All the function has to do is return a valid marssm object from the model element of the `MARSS()` call. This allows me (or anyone else) to use whatever parameter names they want in the model element. This way the user can use familiar names for parameters and can set some parameters to specific values (like 0). Or the user could do something totally different with the model element and just have it be a text string like model="my.ts.model.1" or model="my.ts.model.2". The only constraint is that the function outputs a proper marssm object and that the control, inits, and MCbounds arguments for MARSS are properly specified.
* Added `checkMARSSInputs()` and `checkModelList()`. These replaced the functionality of popWrap.r and checkPopWrap.r.
* Added `MARSS.marxss()`. This is the first `MARSS.form()` function. This is a standardized format so that I can add other forms easily.
* Changed `MARSSkf()` so that K (Kalman gain) is 0 when tinitx=1 and V0=0. Changed `MARSSkf()` to allow some of the diagonals of V0 to be 0 and others non-zero. Got rid of many of the OMGs. Added a `pcholinv()` function to diaghelpers.r which deals with matrices with 0s on the diagonals. This streamlined the filter code.
* Changed `MARSSkem()` to allow time-varying parameters. Made changes to `MARSSkf()`, `MARSSkfas()` and `MARSSsimulate()` to allow time-varying parameters. See EMDerivation.pdf.
* Changed `MARSShessian()` and `MARSSparamCIs()` to allow one to specify the function used to compute the log-likelihood.
* `MARSShessian()`
* `MARSSsettings()`, `MARSS.marxss()`, `is.marssm()`, `is.marssMLE()`.
* Changed `MARSSkem()` and `MARSShatyt()` to allow some diag.V0=0 and others not 0, so the user can mix stochastic and fixed initial x states.
* `MARSSkem()`: removed the OMGs from `MARSSkem()` since they are no longer needed given the new `pcholinv()` function.
* `parmat`, `pcholinv`, `pinv`, and a few other functions.
* Fixed a bug in `MARSSkem()` that meant that the maxit-1 kf and logLik were returned when the algorithm stopped due to hitting maxit. Par was correct.
Version 2.9 was a temporary update to deal with a major change to the API of the KFAS package. It was necessary to disable use of `MARSSkfas()` until that function was rewritten.

* Changed `MARSSboot()` so that MLE objects with method=BFGS can be used; changed the param.gen argument to take "MLE" and "hessian" instead of "KalmanEM" and "hessian". Updated MARSSboot.R and MARSSboot.Rd.
* Disabled `MARSSkfas()` until MARSS can be made compatible with the new version of the KFAS package. Removed importFrom(KFAS, kf) and importFrom(KFAS, ks) from the NAMESPACE. Removed MARSSkfas from the export list in NAMESPACE. Removed KFAS from the depends line of DESCRIPTION.
* Changed `MARSSaic()` and `MARSSparamCIs()` so that the `MARSSboot()` call uses param.gen="MLE". This fixes the bug that caused MLE objects from BFGS calls to fail.
Version 2.8 improved the default initial conditions functions and fixed bugs in the Shumway and Stoffer Kalman filter/smoother function.

* Fixed a bug in `MARSSkf()` when R=0, kf.x0=x10, and V0=0. The algorithm was not setting x(1) via y(1) in this special case.
* In `MARSSinits()`, got rid of the linear regression to get inits for x0; using instead the solution of pi from y(1)=Z*(D*pi+f)+A. This stops MARSS from complaining about no inits when Z is not a design matrix. NB: this means the default initial x0 are different for 2.7 and 2.8, which leads to slightly different answers for `MARSS(dat)` in 2.7 and 2.8. The answers are not really different; they just started with slightly different initial values so have slightly different values when the algorithm reaches its convergence limit.
* Changed `MARSSkemcheck()` to allow lag-p models. I worked on the derivation of the degenerate models (with 0 on the diag of Q) to better define the needed constraints on the B.0 and B.plus sub-matrices. This led to changes in MARSSkemcheck.r so that lag-p models written as a MARSS model are now allowed. There are still problems, though, in x0 estimation in the EM algorithm when there are zeros on the R and B diagonals, so best to use method="BFGS" until I redo the degenerate EM algorithm.
* Added the option to force use of the slower `MARSSkf()` function instead of MARSSkfas. If kf.x0="x10", the default was to use the `MARSSkfas()` function, which is much faster, but it doesn't like 0s on the B diagonal if V0 is 0. So I added the option to force use of the slower `MARSSkf()` function using method="BFGSkf". This required adding stuff to MARSSsettings.r and MARSSoptim.r. This is mainly for debugging since `MARSSoptim()` will now check if optim failed and try using `MARSSkf()` if `MARSSkfas()` was used. Added a line to the output that says which function was used for the likelihood calculation; again for debugging.
* Changed `MARSSmcinit()` to improve random B generation. There is nothing to guarantee that random Bs in the mcinit routine will be within the unit circle; however, it is probably a good idea if they are. Default bounds for B changed to -1,1 and the random B matrix rescaled by dividing by max(abs(Re(eigen(B))))/runif(1) to get the max abs eigenvalue between 0 and 1. This works unless the user has fixed some B values to non-zero values. This required a change to is_marssMLE.r also, to remove the constraint that B bounds be greater than 0.
* Changed `MARSSmcinit()` to allow fixed and shared values in random Qs and Rs. The random Wishart draw is rescaled based on the fixed and shared structure in R or Q. As part of this, I cleaned up how fixed and shared values are specified in the random draws of other parameters. This change doesn't change the end effect, but the code is cleaner.
* `MARSSoptim()` did not allow unconstrained Q or R. The problem had to do with temporarily resetting the upper triangle of the tmp fixed matrices to 0 when using tmp.par as the Cholesky matrix.
* Fixed a bug in `MARSSkf()` when there were 0s on the diagonal of Q. The algorithm only worked if B was diagonal. The fix required changes to the Kalman smoother part of `MARSSkf()`. I rewrote the pertinent section in EMDerivation.pdf.
Versions 2.7 and 2.6 focused on misc. bugs.

* `MARSSPopWrap()`: some of the allowable cases for Z and m were missing.
* `MARSSsimulate()` bug: `MARSSsimulate()` was broken for multivariate simulation since I forgot that rmvnorm returns a 1 x p matrix even if the mean is p x 1. Wrapped the rmvnorm call in an array() to fix the dim setting.
Version 2.5 focused on switching model specification to use list matrices.

* `MARSS()`: the same functionality is provided via list matrices.
* `MARSS()`: just the name of the argument was changed to be more intuitive.
Version 2.2 focused on incorporating the KFAS Kalman filter/smoother functions, which are faster and more stable.

* Added the `MARSSkfas()` function.
* Changed `MARSSkfas()` to deal with 0s on the diag of Ft[,,1] so that R=0 can be handled.
* Replaced `show.doc()` with `RShowDoc()` (base).
Version 2.0 implements changes to allow B and Z estimation and more element sharing in Q and R matrices.

* The `MARSSkem()` algorithm changed to allow B and Z estimation.
* The `MARSSkem()` algorithm changed to allow constrained B and Z estimation. This was the second main objective of MARSS 2.0. This allows you to have fixed values or shared values in your B or Z matrices.
* `MARSSkem()`: there is no iter.V0 control element anymore.
* `MARSSkem()` changed to a more general way to deal with missing values. This is described in the EMDerivation.pdf. It doesn't affect the user, but allows the code to be expanded to more types of models much more easily.
* `MARSSmcinit()`: the MCMC init function would crash for anything except the default model.
* `is.design()` function: a design matrix must have at least as many rows as columns.
* `MARSSkf()`: R has a flaw in terms of how it behaves when you subscript a matrix and the new matrix has a dimension length of 1 for one (or more) dimensions. For example, if a=array(0,dim=c(1,2,4)), then a[,,1] is no longer a matrix but instead is a vector, and dim(a[,,1]) is NULL. This can cause all sorts of mysterious bugs. Sometimes adding drop=FALSE will prevent this unpleasant behavior. If b=matrix(0,2,2), dim(b[,1,drop=FALSE]) is c(2,1) while dim(b[,1]) is NULL. drop=FALSE works great with 2-dimensional matrices, but with 3-dimensional matrices it doesn't work as wanted. If a=array(0,dim=c(1,2,4)), dim(a[,,1,drop=FALSE]) is c(1,2,1) instead of c(1,2), which is what you want if a[,,1] is going to appear in some matrix operation. This problem came up in the Kt[,,t] %*% innov[,t] line in MARSSkf. Normally Kt[,,t] is square and a square matrix or a scalar is returned, but if Kt[,,t] happened to be something like dim=c(1,3,20) then Kt[,,t] returned a VECTOR of length 3. In this case, Kt[, , t] %*% innov[, t] crashed the code. I had to use a kluge to force R to keep the dimensions after subscripting. This bug only occurred in models where Z is not a design matrix.
* Added `summary(marssMLE object)`.
* Added `MARSSoptions()`. This allows you to change the defaults for the `MARSS()` function. See `?MARSSoptions`.
* Added `MARSSLLprofile()`. This allows you to plot some basic log-likelihood profiles. See `?MARSSLLprofile`.