Returns an estimate of the partial derivative of expression «y» with respect to variable «x». For example
DyDx(X^3, X) → 3*X^2
It estimates the result as the ratio of the change in «y» to a small change delta in «x» -- i.e.
DyDx(Y, X) == (WhatIf(Y, X, X + delta) - Y) / delta
If «x» does not affect «y», perhaps because «y» does not depend on «x», the result is zero.
By default, the small change delta is 1.0e-8. If «x» itself is small -- e.g. around 1.0e-8 -- you may want to modify delta to get a more accurate result. You can set it using the optional parameter «delta», e.g.:
DyDx(Y, X, delta: 1.0e-4)
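The forward-difference estimate described above can be sketched in a few lines of Python. This is an illustrative sketch of the numerical idea, not Analytica's implementation; the function name `dydx` and its signature are chosen here for illustration.

```python
# Forward-difference estimate of a partial derivative, illustrating the
# (f(x + delta) - f(x)) / delta formula used by DyDx. Illustrative only.
def dydx(f, x, delta=1.0e-8):
    """Estimate df/dx at x as (f(x + delta) - f(x)) / delta."""
    return (f(x + delta) - f(x)) / delta

# Derivative of x^3 at x = 2; the true value is 3 * 2**2 = 12.
slope = dydx(lambda x: x**3, 2.0)

# If f does not depend on x, the numerator is zero, so the estimate is zero.
flat = dydx(lambda x: 7.0, 2.0)

# A larger delta can be passed when x itself is very small, mirroring
# the optional «delta» parameter of DyDx.
slope2 = dydx(lambda x: x**3, 2.0, delta=1.0e-4)
```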
DyDx(y, x, I)
(requires Analytica 5.0)
Returns an estimate of the gradient of «y» with respect to «x», where «x» is indexed by «I». For each position in «I», this is the partial derivative of «y» with respect to a slice of «x» along «I». You can list multiple indexes, e.g.,
DyDx(y, x, I, J, K), in which case it estimates the partial derivative at every combination of the specified indexes.
Estimating the gradient requires «y» to be re-evaluated once for every combination of indexes.
When «y» has an index that is not shared by «x», the result of
DyDx(y, x, I) is referred to in mathematics as a Jacobian, so this is also the method to use when you need to estimate a Jacobian. There is no need to mention the index of «y» -- it is carried along automatically by array abstraction.
DyDx(y, x, I) is equivalent to computing DyDx(y, x[I = i]) for each value i of «I».
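The slice-by-slice estimation described above can be sketched in Python: perturb one position of «x» at a time and re-evaluate «y», which also shows why the cost is one re-evaluation per index position. This is an illustrative sketch using NumPy, not Analytica's implementation; the function name `jacobian` is an assumption for this example.

```python
# Illustrative finite-difference Jacobian: for each position j of x,
# perturb x[j] by delta and re-evaluate f, giving column j of the Jacobian.
import numpy as np

def jacobian(f, x, delta=1.0e-6):
    """Estimate J[i, j] = d f_i / d x_j by forward differences.
    Requires one re-evaluation of f per element of x."""
    y0 = np.asarray(f(x), dtype=float)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += delta                 # perturb one slice of x along its index
        J[:, j] = (np.asarray(f(xp), dtype=float) - y0) / delta
    return J

# Example: y = [x0*x1, x0 + x1]; the true Jacobian at (2, 3) is [[3, 2], [1, 1]].
J = jacobian(lambda x: np.array([x[0] * x[1], x[0] + x[1]]),
             np.array([2.0, 3.0]))
```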
When you evaluate
DyDx(y, x) in mid-mode, the mid value of «x» is varied and the mid value of «y» is evaluated. In sample mode, the sample of «x» is varied and the sample of «y» is computed. Therefore, when «y» is a statistical function of «x», care must be taken to ensure that the evaluation modes for «x» and «y» correspond. For example, if «y» is defined as Kurtosis(x), evaluating DyDx(y, x) in mid-mode would not produce the expected result: Kurtosis evaluates its parameter, and thus «x», in sample mode, resulting in a mismatch in computation modes. To get the desired result, you should explicitly use the mid-value of «x», e.g., by referring to Mid(x) in the definition of «y».
DyDx can be used when «x»'s definition contains a probability distribution or other random element, provided that «x»'s caching method is configured as the default cache-always (which it is unless you have explicitly configured it otherwise).
Preservation of values
By default, DyDx preserves previously computed values throughout the model. In rare situations, if your model consumes nearly all available memory and you'd rather allow previously computed values to be dropped while computing DyDx, you can set the optional «preserve» parameter to false, e.g.:
DyDx(y, x, preserve: false)
Of course, when you later view results that had been previously cached, they will need to be recomputed, and probability distributions will need to be re-sampled.
- In Analytica 4.1 and earlier, DyDx(y, x) cannot be used when the definition of «x» contains a probability distribution or other random element; in such a case, the result appears random and nonsensical in sample mode. To use DyDx on an uncertain quantity «x», separate the random part (the distribution) into a parent variable of «x» (so that the particular random sample is cached); the desired result will then be obtained correctly.
- In Analytica 4.1 and earlier, evaluation of DyDx(y, x) causes any previously computed results that are downstream of «x» to become invalidated.
- The «preserve» parameter was added in Analytica 4.2.
- The optional repeated index parameter, which estimates the gradient, was added in Analytica 5.0.