# Tutorial videos

## Contents

- 1 General
- 2 Table and Array Topics
  - 2.1 The Basics of Analytica Arrays and Indexes
  - 2.2 Local Variables
  - 2.3 Array Concatenation
  - 2.4 Flattening and Unflattening of Arrays
  - 2.5 The Aggregate Function
  - 2.6 Sorting
  - 2.7 Self-Indexes, Lists and Implicit Dimensions
  - 2.8 Introduction to DetermTables
  - 2.9 Table Splicing
  - 2.10 Step Interpolation
  - 2.11 SubTables
  - 2.12 Edit Table Enhancements in Analytica 4.0
- 3 Modeling Time
- 4 Analytica Language Features
  - 4.1 Local Indexes
  - 4.2 Handles and Meta-Inference
  - 4.3 The Iterate Function
  - 4.4 The Reference and Dereference Operators
  - 4.5 Writing User-Defined Functions
  - 4.6 Custom Distribution Functions
  - 4.7 Regular Expressions
  - 4.8 Using the Check Attribute to validate inputs and results
  - 4.9 The Performance Profiler
- 5 Organizing Models
- 6 Uncertainty & Probability Topics
  - 6.1 Gentle Introduction to Modeling Uncertainty: Webinar Series
    - 6.1.1 Session 1: Uncertainty and Probability
    - 6.1.2 Session 2: Probability Distributions
    - 6.1.3 Session 3: Monte Carlo
    - 6.1.4 Session 4: Measures of Risk and Utility
    - 6.1.5 Session 5: Risk Analysis for Portfolios
    - 6.1.6 Session 6: Common Parametric Distributions
    - 6.1.7 Session 7: Expert Assessment of Uncertainty
    - 6.1.8 Session 8: Hypothesis Testing
  - 6.2 Expecting the Unexpected: Coping with surprises in Probabilistic and Scenario Forecasting
  - 6.3 Correlated and Multivariate Distributions
  - 6.4 Assessment of Probability Distributions
  - 6.5 Statistical Functions
  - 6.6 Spearman Rank Correlation
  - 6.7 Statistical Functions in Analytica 4.0
  - 6.8 The Large Sample Library

- 7 Sensitivity Analysis Topics
- 8 Financial Analysis
- 9 Data Analysis Techniques
- 10 Bayesian Techniques
- 11 Presenting Models to Others
- 12 Application Integration Topics
- 13 Optimization
- 14 Vertical Applications and Case Studies
- 15 Graphing
- 16 Scripting
- 17 Analytica User Community
- 18 Licensing or Installation
- 19 See also

## General

### Introduction to Analytica webinar

This is an unedited 75-minute recording of the *Introduction to Analytica webinar* given on 3-Aug-2016. It is the webinar you attend when you Sign up for a Live Webinar on the Lumina Home Page.

**Watch:** Introduction to Analytica Webinar

You can alternatively watch the demo portion of the webinar, showing off the Enterprise model, recorded separately. This 13-minute streamlined recording skips the PowerPoint slides (What is Analytica, Benefits, Key Features, Applications, Users, Editions, Resources), but comprises the core of the demo. There is audio, but you might need to turn up the volume a bit.

**Watch:** Enterprise Model Demo.mp4

### Analytica Cloud Player (ACP)

Learn how to use the Analytica Cloud Player (ACP) for collaboration and to create web applications for end users, so they can run models without having to download any software:

- Upload a model instantly to ACP from Analytica using **Publish to Web...** from the File menu.
- Explore and run a model in ACP via a web browser
- Use the ACP style library to modify the user interface to create a web application, including tab-based navigation, embedding graphs and tables in a diagram, extra diagram styles, sizing a window for the web, autocalc, and more.
- Set up ACP Group Accounts for multiple users to share models in project directories.
- Set up Group Account member access as Reviewers, Authors, or Admins.

**Watch:** Analytica Cloud Player (ACP) Webinar

**See also:** Analytica Cloud Player, ACP Style Library

**Presenter:** Max Henrion, CEO of Lumina, on 18 Feb 2016

### Expression Assist

Expression Assist suggests matching variables and functions as you type definitions. It helps novices and experts alike, and can dramatically speed up the task of writing Analytica expressions. It often provides help, saving you from having to consult a reference elsewhere.

**Watch:** New Expression-Assist.wmv.

**Presenter:** Lonnie Chrisman, CTO, Lumina Decision Systems, on 9 Feb 2012

**See also:** Expression Assist

## Table and Array Topics

### The Basics of Analytica Arrays and Indexes

These two videos introduce the basic concepts of indexes, multi-dimensional arrays, and Intelligent Array Abstraction. These features are what make Analytica such a powerful tool, and mastering them is key to using Analytica effectively. Understanding them means letting go of some preconceptions you may have from using Excel or other modeling languages.

Part 1: Indexes, 1-D arrays, and the Subscript/Slice operator. Watch at Intro-to-arrays (Part 1).wmv

Part 2: Array functions, multi-D arrays, and array abstraction. Watch at Intro-to-arrays (Part 2).wmv

Presented by Lonnie Chrisman, CTO, Lumina, on Jan 10 and 17, 2008.

Download models containing the examples created during the webinars from Intro to intelligent arrays.ana and Plane catching decision with EVIU.ana.

### Local Variables

**Date and Time:** Thursday, 23 July 2009, 10:00-11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

I'll explain distinctions between different types of local variables that can be used within expressions. These distinctions are of primary interest for people implementing Meta-Inference algorithms, since they have a lot to do with how Handles are treated. Analytica 4.2 introduces some new distinctions among the types of local variables, designed to make the behavior of local variables cleaner and more understandable. One type of local variable is the LocalAlias, in which the local variable identifier serves as an alias to another existing object. In contrast, there is the MetaVar, which may hold a Handle to another object, but does not act as an alias. The only local variable option that existed previously, declared using Var..Do, is a hybrid of these two, which leads to confusion when manipulating handles. Since LocalAlias..Do and MetaVar..Do have very clean semantics, using these when writing Meta-Inference algorithms should help to reduce that confusion considerably. Inside a User-Defined Function, parameters are also instances of local variables, and depending on how they are declared, may behave as a MetaVar or LocalAlias, so I'll discuss how these fit into the picture, as well as local indexes.

This is appropriate for advanced Analytica modelers.

You can watch a recording of this webinar at Local-Variables.wmv. The analytica file from the webinar is at Local Variables.ana, where I've also implemented the exercises that I had suggested at the end of the webinar, so you can look in the model for the solutions.

### Array Concatenation

**Date and Time:** Thursday, 25 June 2009 10:00am-11:00 Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Array concatenation combines two (or more) arrays by joining them side-by-side, creating an array having all the elements of both arrays. The special case of list-concatenation joins 1-D arrays or lists to create a list of elements that can function as an index. Array concatenation is a basic, and common, form of array manipulation.

The Concat function has been improved in Analytica 4.2, so that array concatenation is quite a bit easier in many cases, and the ConcatRows function is now built-in (formerly it was available as a library function).

I'll take you through examples of array concatenation, including cases that have been simplified with the 4.2 enhancements, to help develop your skills at using Concat and ConcatRows.
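To make the idea concrete outside the webinar, here is a minimal Python sketch of concatenation along a shared index (this is an analogue of the concept, not Analytica's Concat syntax; the labels and numbers are illustrative):

```python
# Each "array" is modeled as a dict mapping index labels to values.
# Joining two such arrays side-by-side yields an array whose index
# holds all the labels of both inputs -- the essence of Concat.

def concat(a, b):
    """Join two labeled 1-D arrays side-by-side."""
    out = dict(a)
    out.update(b)
    return out

q1 = {"Jan": 100, "Feb": 120, "Mar": 90}
q2 = {"Apr": 130, "May": 110, "Jun": 95}
h1 = concat(q1, q2)
combined_index = list(h1)   # Jan through Jun -- usable as a new index
```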

This webinar is appropriate for all levels of Analytica modelers.

You can view a recording of this webinar at Array_Concatenation.wmv. The model file created during the webinar is: Array_Concatenation.ana.

### Flattening and Unflattening of Arrays

**Date and Time:** January 31, 2008, 10:00 - 11:00 Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

On occasion you may need to flatten a multi-dimensional array into a 2-D table. The table could be called a *relational representation* of the data. In some circles it is also referred to as a *fact table*. Or, you may need to convert in the other direction -- expanding, or unflattening, a relational/fact table into a multi-dimensional array. In Analytica, the MdTable and MdArrayToTable functions are the primary tools for unflattening and flattening. In this session, I'll introduce these functions, show how to use them through several examples, and cover many variations.
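The flatten/unflatten round trip can be sketched in a few lines of Python (a hedged analogue of the concept behind MdArrayToTable and MdTable, not their Analytica signatures; the sales data is made up):

```python
# Flattening: a 2-D array becomes a list of (row, col, value) fact-table records.
def flatten(array, row_index, col_index):
    return [(r, c, array[r][c]) for r in row_index for c in col_index]

# Unflattening: rebuild the 2-D array from the fact-table records.
def unflatten(records):
    out = {}
    for r, c, v in records:
        out.setdefault(r, {})[c] = v
    return out

sales = {"East": {"Q1": 10, "Q2": 12}, "West": {"Q1": 7, "Q2": 9}}
facts = flatten(sales, ["East", "West"], ["Q1", "Q2"])
# Round trip recovers the original array
restored = unflatten(facts)
```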

The model developed during this talk is at Flattening_and_Unflatting_Arrays.ana. A recording of the webinar can be viewed at Array-Flattening.wmv

### The Aggregate Function

**Date and Time:** Thursday, 2 July 2009, 10:00am - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Aggregation is the process of transforming an array based on a fine-grain index into a smaller array based on a coarser-grain index. For example, you might map a daily cash stream into monthly revenue (i.e., reindexing from days to months).

This has always been a pretty common operation in Analytica models, with a variety of techniques for accomplishing it, but it has just become more convenient with the Aggregate function, new to Analytica 4.2.

In the webinar, I'll be demonstrating the use and generality of the Aggregate function. In the process, it will also be a chance to review a number of other basic intelligent array concepts, including array abstraction, subscripting, re-indexing, etc.
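The day-to-month example above can be sketched in Python (a conceptual analogue of Aggregate, not its Analytica parameter list; the dates and values are illustrative):

```python
from collections import defaultdict

def aggregate(values, mapping, how=sum):
    """Combine fine-grain values into coarse-grain buckets, given a
    mapping from each fine index element to its coarse element."""
    buckets = defaultdict(list)
    for key, v in values.items():
        buckets[mapping[key]].append(v)
    return {k: how(vs) for k, vs in buckets.items()}

daily = {"Jan-01": 5, "Jan-02": 7, "Feb-01": 4}
month_of = {"Jan-01": "Jan", "Jan-02": "Jan", "Feb-01": "Feb"}
monthly = aggregate(daily, month_of)   # daily cash re-indexed to months
```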

This webinar is appropriate for all levels of Analytica modelers.

A recording of this webinar can be viewed at Aggregate.wmv. The model file created during this webinar is: Aggregate Function.ana.

### Sorting

**Date and Time:** Thursday, 6 Aug 2009, 10:00am-11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This webinar will demonstrate the functions in Analytica that are used to sort (i.e., re-order) data -- the functions SortIndex, Rank, and Sort (new in 4.2). I'll cover the basics of using these functions, including how they interact with indexes, how to apply them to arrays of data, and their use with array abstraction. I'll then introduce several new 4.2 extensions for handling multi-key sorts, descending options, and case insensitivity.
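The core distinctions can be illustrated in Python (conceptual analogues only, not Analytica syntax; the sample data is made up):

```python
data = ["banana", "Apple", "cherry"]

# SortIndex analogue: the positions that would sort the data
# (case-insensitive, via the key function)
sort_index = sorted(range(len(data)), key=lambda i: data[i].lower())
sorted_data = [data[i] for i in sort_index]

# Rank analogue: each element's position (1-based) in sorted order
rank = [sort_index.index(i) + 1 for i in range(len(data))]

# Multi-key sort with one descending key: sort by name ascending,
# then by count descending (negate the numeric key)
rows = [("A", 2), ("B", 1), ("A", 1)]
rows_sorted = sorted(rows, key=lambda r: (r[0], -r[1]))
```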

This webinar is appropriate for all levels of Analytica modelers.

A recording of this webinar can be viewed at Sorting.wmv. The model file created during the webinar is at Sorting.ana.

### Self-Indexes, Lists and Implicit Dimensions

**Date and Time:** January 24, 2008, 10:00 - 11:00 Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Every dimension of an Analytica array is associated with an index object. Array Abstraction recognizes when two arrays passed as parameters to an operator or function contain the same indexes. These indexes are most commonly defined by a global index object, i.e., an index object that appears on a diagram as a parallelogram node. However, variable and decision nodes can also serve as indexes, and can even have a multi-dimensional value in addition to being an index. This is referred to as a self-index. If a variable identifier is used in an expression, the context in which it appears always makes it clear whether the identifier is being used as an index, or as a variable with a value. Self-indexes can arise in several ways, which I will cover. In rare cases, when writing an expression, you may need to be aware of whether you intend to use the index value or the context value of a self-indexed variable. I'll discuss these cases, for example in For..Do loops, and the use of the IndexValue function.

In some cases, lists may be used in expressions, and when combined with other results, lists can end up serving as an implicit dimension of an array. An implicit dimension is a bit different from a full-fledged index since it has no name, and hence no way to refer to it in an expression where an index parameter is expected. Yet most built-in Analytica functions can still be employed to operate over an implicit index. When an implicit index reaches the top level of an expression, it is promoted to a self-index. I will explain and demonstrate these concepts.

The model developed during this talk is at Self-Indexes_Lists_and_Implicit_dimensions.ana. A recording of the webinar can be viewed at Self-Indexes-Implicit-Dims.wmv

### Introduction to DetermTables

**Date and Time:** 18 September 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

A DetermTable provides an input view like that of an edit table, allowing you to specify values or expressions in each cell for all index combinations; however, unlike a table, the evaluation of a determtable conditionally returns only selected values from the table. It is called a determtable because it acts as a deterministic function of one or more discrete-valued variables. You can conceptualize a determtable as a multi-dimensional generalization of a select-case statement found in many programming languages, or as a value that varies with the path down a decision tree.

DetermTables can be used to encode a table of utilities (or costs) for each outcome in a probabilistic model. In this usage, they combine very naturally with ProbTables (probability tables) for discrete probabilistic models. They are also extremely useful in combination with Choice pulldowns, allowing you to keep lots of data in your model, but using only a selected part of that for your analysis. This leads to Selective Parametric Analysis, which is often an effective way of coping with memory capacity limitation in high dimensional models.

In this talk, I'll introduce the DetermTable, show how you create one, and describe the requirements for the table indexes. The actual "selection" of slices occurs in the table indexes. Not all indexes have to be selectors, but I'll explain the difference and how the domain attribute is used to establish the table index, while the value is used to select the slice. When you define the domain of a variable that will serve as a DetermTable index, you have the option of defining the domain as an *index domain*. This can be extremely useful in combination with a DetermTable, so I will cover that feature as well. It is helpful to understand how the functionality of a DetermTable can be replicated using two nodes -- the first containing an Edit Table and the second using Subscript. Despite this equivalence, a DetermTable can be especially convenient, both because it simplifies things by requiring one less node, and because an Edit Table can be easily converted into a DetermTable.
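The table-plus-Subscript equivalence mentioned above can be sketched in Python (a hedged analogue of the idea; the table contents and the selector name are illustrative, not from the webinar models):

```python
# A DetermTable behaves like an edit table plus a selection step: the table
# holds values for every element of the selector's domain, and evaluation
# returns only the slice for the selector's current value (a generalized
# select-case).
table = {  # edit table over the selector's domain
    "low":    {"cost": 10, "yield": 0.2},
    "medium": {"cost": 25, "yield": 0.5},
    "high":   {"cost": 60, "yield": 0.9},
}
selector = "medium"         # current value of the discrete variable
result = table[selector]    # Subscript-style selection of one slice
```

Keeping the full table in the model while computing with only the selected slice is what enables the Selective Parametric Analysis style described above.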

You can watch a recording of this webinar at DetermTables.wmv. The examples created while demonstrating the mechanics of DetermTables are saved here: DetermTable intro.ana. Other example models used were the *2-branch party problem.ana* and the *Compression post load calculator.ana*, both distributed in the Example models folder with Analytica, and the Loan policy selection.ana model.

### Table Splicing

**Date and Time**: Thursday, August 14, 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Edit tables, probability tables and determ tables automatically adjust when their index's values are altered. When new elements are inserted into an index, rows (or columns or slices) are automatically inserted, and when elements are deleted, rows (or columns or slices) are deleted from the tables. This process of adjusting tables is referred to as *splicing*.

Some indexes in Analytica may be computed, so that changes to some input variables could result in dramatic changes to the index value, both in terms of the elements that appear and the order of the elements in the index. This creates a correspondence problem for Analytica -- how do the rows after the change correspond to the rows before the change? Analytica can utilize three different methods for determining the correspondence: associative, positional, or flexible correspondence. I'll discuss what these are and show you how you can control which method is used for each index.

When slices (rows or columns) are inserted in a table, Analytica will usually insert 0 (zero) as the default value for the new cells. It is possible, however, to explicitly set a default value, and even to set a different default for each column of the table. Doing so requires some typescripting, but I'll take you through the steps.

Using blank cells as a default value, rather than zero, has some advantages. It becomes quickly apparent which cells need to be filled in after index items are inserted, and Analytica will issue a warning message if blank cells exist that you haven't yet filled in. I'll take you through the steps of enabling blank cells by default.

You can watch a recording of this webinar at Edit-Table-Splicing.wmv. (Note: There is a gap in the recording's audio from 18:43-27:35).

### Step Interpolation

**Date and Time:** Thursday, 8 April 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The StepInterp function is useful in a number of scenarios, including:

- Discretizing a continuous quantity into a set of finite buckets.
- Looking up a value from a "schedule table" (e.g., tax-rate table, depreciation table)
- Mapping from a date to its fiscal year, when the fiscal year starts on an arbitrary mid-year date.
- Mapping from a cumulated value back to the index element/position.
- Performing a "nearest" or "robust" Subscript or Slice operation.
- Interpolating values for a relationship that changes in discrete steps

In this webinar, I'll demonstrate how to use the StepInterp function on several simple examples.
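The schedule-table lookup from the list above can be sketched in Python (a conceptual analogue of step interpolation, not Analytica's StepInterp signature; the bracket numbers are made up):

```python
import bisect

def step_interp(xs, ys, x):
    """Return ys[i] for the first xs[i] >= x -- a 'step' lookup.
    xs must be in ascending order; x above the last breakpoint
    is clamped to the final value."""
    i = bisect.bisect_left(xs, x)
    i = min(i, len(ys) - 1)
    return ys[i]

# Schedule-table lookup: tax rate by income-bracket upper bound
brackets = [10000, 40000, 85000]
rates    = [0.10,  0.22,  0.32]
rate = step_interp(brackets, rates, 25000)   # falls in the second bracket
```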

This webinar is appropriate for beginning Analytica modelers and up.

You can watch a recording of this webinar at: StepInterp.wmv. You can download the model created during this webinar from Step Interp Intro.ana.

### SubTables

**Date and Time:** Thursday, 31 July 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The SubTable function allows a subset of another edit table to be edited by the user as a different view. To the user, it appears like editing any other edit table; however, the changes are stored in the original edit table. The rows and columns can be transformed to other dimensions in the SubTable, with different index element orders, based on Subset indexes, and with different number formats.

A recording of this webinar can be viewed at SubTables.wmv. The model file from this webinar is at media:SubTable_webinar.ana.

### Edit Table Enhancements in Analytica 4.0

**Date and Time:** Thursday, Aug 2, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

In this webinar, I will demonstrate several new edit table functionalities in Analytica 4.0, including:

- Insert Choice drop-down controls in table cells.
- Splicing tables based on computed indexes.
- Customizing the default cell value(s).
- Blank cells to catch entries that need to be filled in.
- SubTables
- Using different number formats for each column.

This talk is oriented for model builders with Analytica model-building experience.

The Analytica session that existed by the end of the talk is stored in the following model file: "Edit Table Features.ana".

## Modeling Time

### Manipulating Dates in Analytica

**Date and Time:** Thursday, Sept. 13, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

In this talk, I'll cover numerous aspects relating to the manipulation of dates in Analytica. I'll introduce the encoding of dates as integers and the date origin preference. I'll review how to configure input variables, edit tables, or even individual columns of edit tables to accept (and parse) dates as input. I'll cover date number format capabilities in depth, including how to create your own custom date formats, understanding how date formats interact with your computer's regional settings, and how to restrict a date format to a single column only. We'll also see how axis scaling in graphs is date-aware.

Next, we'll examine various ways to manipulate dates in Analytica expressions. This includes use of the new and powerful functions MakeDate, DatePart, and DateAdd, and some interesting ways in which these can be used, for example, to define date sequences. Finally, we'll practice our array mastery by aggregating results to and from different date granularities, such as aggregating from a month sequence to years, or interpolating from years to months.

The model file resulting by the end of the session is available here: Manipulating Dates in Analytica.ana.

You can watch a recording of this webinar here: Manipulating Dates.wmv (Windows Media Player required) Unfortunately, this one seems to have recorded poorly -- the video size is too small. If you magnify it in your media player, it does become readable. Sorry -- I don't know why it recorded like this.

### The Dynamic Function

**Date and Time:** Thursday, 12 June 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The Dynamic function is used for modeling or simulating changes over time, in which values of variables at time t depend on the values of those variables at earlier time points. Analytica provides a special system index named Time that can be used like any other index, but which also has the additional property that it is used by the Dynamic function for dynamic simulation.

This webinar is a brief introduction to the use of the Dynamic function and to the creation of dynamic models. I'll cover the basic syntax of the Dynamic function, as well as various ways in which you can refer to values at earlier time points within an expression. Dynamic models result in influence diagrams that have directed cycles (i.e., where you can start at a node, follow the arrows forward and return to where you started), called dynamic loops. Similar *cyclic dependencies* are disallowed in non-dynamic influence diagrams.

During the webinar, we'll look at several simple examples of Dynamic, oriented especially for those of you with little or no experience with using Dynamic in models. I'll provide some helpful hints for keeping things straight when building dynamic models. For the more seasoned modelers, I'll also try to fold in a few more detailed tidbits, such as some explanation about how dynamic loops are evaluated, and how variable identifiers are interpreted somewhat differently within dynamic loops.
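The core pattern -- a value at time t defined in terms of values at earlier time points -- can be sketched in Python (a hedged analogue of a dynamic loop over the Time index; the webinar's model extends Fibonacci's rabbit-growth model, and this sketch uses the plain Fibonacci recurrence for illustration):

```python
def simulate(n_steps):
    """Dynamic-style simulation: each value depends on earlier time points."""
    pop = [1, 1]                        # initial conditions at t=0 and t=1
    for t in range(2, n_steps):
        # value at time t depends on the values at t-1 and t-2
        pop.append(pop[t - 1] + pop[t - 2])
    return pop

populations = simulate(7)   # population at each of 7 time steps
```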

The model developed (extension of Fibonacci's rabbit growth model) can be downloaded here: The Dynamic Function.ana. A recording of the webinar can be viewed at Dynamic-Function.wmv.

### Modeling Markov Processes in Analytica

**Date and Time:** Thursday, Sept. 20, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Matthew Bingham, Principal Economist, Veritas Economic Consulting

**Abstract**

Mathematical processes characterized by dynamic dependencies between successive random variables are called Markov chains. The rich behavior and wide applicability of Markov chains make them important in a variety of applied areas including population and demographics, health outcomes, marketing, genetics, and renewable resources. Analytica’s dynamic modeling capabilities, robust array handling, and flexible uncertainty capabilities support sophisticated Markov modeling. In this webinar, a Markov modeling application is demonstrated. The model develops age-structured population simulations using a Leslie matrix structure and dynamic simulation in Analytica.
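A Leslie-matrix update can be sketched in a few lines of Python (a hedged analogue of the age-structured simulation idea; the fecundity and survival numbers here are illustrative, not taken from the webinar's model):

```python
# Age-structured population update: new births come from the fecundity of
# each age class, and survivors advance one age class per time step.
fecundity = [0.0, 1.2, 1.5]   # offspring per individual in each age class
survival  = [0.5, 0.8]        # probability of surviving to the next class

def step(pop):
    births = sum(f * n for f, n in zip(fecundity, pop))
    aged = [s * n for s, n in zip(survival, pop[:-1])]
    return [births] + aged

pop = [100.0, 50.0, 20.0]     # initial population by age class
for _ in range(10):           # dynamic simulation over ten time steps
    pop = step(pop)
```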

A recording of this session can be viewed at: Markov-Processes.wmv (requires Windows Media Player)

An article about the model presented here: AnalyticaMarkovtext.pdf

See also Donor/Presenter Dashboard -- a sample model that implements a continuous-time Markov chain in Analytica's discrete-time dynamic simulation environment.

## Analytica Language Features

### Local Indexes

**Date and Time:** Thursday, Dec. 13, 2007 at 10:00 - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

A local index is an index object created during the evaluation of an expression using either the Index..Do or MetaIndex..Do construction. Local indexes may exist only temporarily, being reclaimed when they are no longer used, or they may live on after the evaluation of the expression has completed, as an index of the result. Some operations require the use of local indexes, or otherwise could not be expressed.

In this talk, I'll introduce simple uses of local indexes, covering how they are declared using Index..Do, with several examples. We'll see how to access a local index using the A.I operator. I'll discuss the distinctions between local indexes and local variables. I'll show how the name of a local index can be computed dynamically, and I'll briefly cover the IndexNames and IndexesOf functions.

The model created during this talk is here: Webinar_Local_Indexes.ana.

You can watch a recording of this webinar at: Local-Indexes.wmv (Requires Windows Media Player)

### Handles and Meta-Inference

**Date and Time:** Thursday, Dec. 6, 2007 at 10:00 - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Meta-inference refers to computations that reason about your model itself, or that actually alter your model. For example, if you were to write an expression that counted how many variables are in your model, you would be reasoning about your model. Other examples of meta-inference include changing the visual appearance of nodes to communicate some property, re-arranging nodes, finding objects with given properties, or even creating a transformed model based on a portion of your model's structure.

The ability to implement meta-inferential algorithms in Analytica has been greatly enhanced in Analytica 4.0. The key to implementation of meta-inference is the manipulation of Handles to objects (formerly referred to as *varTerms*). This webinar will provide a very brief introduction to handles and using them from within expressions. I will assume you are pretty familiar with creating models and writing expressions in Analytica, but I will not assume that you have previously seen or used Handles. This topic is oriented towards more advanced Analytica users.

The model used/created during this webinar is at: Handle and MetaInference Webinar.ANA.

You can watch a recording of this webinar at: Handles.wmv (Requires Windows Media Player)

### The Iterate Function

**Date and Time:** Thursday, Nov. 29, 2007 at 10:00 - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

With Iterate, you can create a recurrent loop around a large model, which can be useful, for example, for iterating until a convergence condition is reached. Complex iterations, where many variables are updated at each iteration, require you to structure your model appropriately, bundling and unbundling values within the single iterative loop. With some work, Iterate can be used to simulate the functionality of Dynamic, and thus provides one option when a second Time-like index is needed (although it is not nearly as convenient as Dynamic).

In this session, we'll explore how Iterate can be used.
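The iterate-until-convergence pattern can be sketched in Python (a hedged analogue of the concept, not Analytica's Iterate signature; the cosine fixed-point update is an illustrative example, not from the webinar):

```python
import math

def iterate(update, x0, tol=1e-10, max_iter=1000):
    """Repeat an update until successive values converge."""
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:   # convergence condition reached
            return x_next
        x = x_next
    return x

# Example: the fixed point of x -> cos(x)
fixed_point = iterate(math.cos, 1.0)
```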

Here is the model file developed during the webinar: Iterate Demonstration.ANA

You can watch a recording of this webinar at: Iterate.wmv (Requires Windows Media Player)

### The Reference and Dereference Operators

**Date and Time:** Thursday, Nov. 15, 2007 at 10:00 - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**
The reference operators make it possible to represent complex data structures like trees or non-rectangular arrays, bundle heterogeneous data into records, maintain arrays of local indexes, and seize control of array abstraction in a variety of scenarios. Using a reference, an array can be made to look like an atomic element to array abstraction, so that arrays of differing dimensionality can be bundled into a single array without an explosion of dimensions. The flexibility afforded by references is generally for the advanced modeler or programmer, but once mastered, references prove useful fairly often.

Here is the model used during the webinar: Reference and Dereference Operators.ana. Near the end of the webinar, I encountered a glitch that I was not able to resolve until after the webinar was over. This has been fixed in the attached model. For an explanation of what was occurring, see: Analytica_User_Group/Reference_Webinar_Glitch.

You can watch a recording of this webinar at: Reference-And-Dereference.wmv (Requires Windows Media Player)

### Writing User-Defined Functions

**Date and Time:** Thursday, Sept. 27, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

When you need a specialized function that is not already built into Analytica, never fear -- you can create your own User-Defined Function (UDF). Creating UDFs in Analytica is very easy. I'll introduce this convenient capability, and demonstrate how UDFs can be organized into libraries and re-used in other models. I'll also review the libraries of functions that come with Analytica, providing dozens of additional functions.

After this introduction to the basics of UDFs, I'll dive into an in-depth look at Function Parameter Qualifiers. There is a deep richness to function parameter qualifiers, mastery of which can be used to great benefit. One of the main objectives for a UDF author, and certainly a hallmark of good modeling style, should be to ensure that the function fully array abstracts. Although this usually comes for free with simple algorithms, it is sometimes necessary to worry about this explicitly. I will demonstrate how this objective can often be achieved through appropriate function parameter qualification.

Finally, I will cover how to write a custom distribution function, and how to ensure it works with Mid, Sample and Random.

This talk is appropriate for Analytica modelers from beginning through expert level. At least some experience building Analytica models and writing Analytica expressions is assumed.

The model created during this webinar, complete with the UDFs written during that webinar, can be downloaded here: Writing User Defined Functions.ana.

You can watch this webinar here: Writing-UDFs.wmv (Windows Media Player required)

### Custom Distribution Functions

**Date and Time:** Thursday, 24 July 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Analytica comes with most of the commonly used distributions built-in, and many additional distribution functions available in the standard libraries. However, in specific application areas, you may encounter distribution types that aren't already provided, or you may wish to create a variation on an existing distribution based on a different set of parameters. In these cases, you can create your own *User-Defined Distribution Function* (UDDF). Once you've created your function, you can utilize it within your model like you would any other distribution function.

User-defined distribution functions are really just instances of User-Defined Functions (UDFs) that behave in certain special ways. This webinar discusses the various functionalities that a user-defined distribution function should exhibit and various related considerations. Most fundamentally, the defining feature of a UDDF is that it returns a median value when evaluated in Mid mode, but a sample indexed by Run when evaluated in Sample mode. This contrasts with non-distribution functions, whose behavior does not depend on the Mid/Sample evaluation mode. Custom distributions are most often implemented in terms of existing distributions (which includes Inverse CDF methods for implementing distributions), so that this property is achieved automatically since the existing distributions already have this property. But in less common cases, UDDFs may treat the two evaluation modes differently.

When you create a UDDF, you may also want to ensure that it works with Random() to generate a single random variate, and supports the Over parameter for generating independent distributions. You may also want to create a companion function for computing the density (or probability for discrete distributions) at a point, which may be useful in a number of contexts including, for example, during importance sampling. I'll show you how these features are obtained.

There are several techniques that are often used to implement distribution functions. The two most common, especially in Analytica, are the *Inverse CDF* technique and the *transformation from existing distributions* method. I'll explain and show examples of both of these. The Inverse CDF is particularly convenient in that it supports all sampling methods (Median Latin Hypercube, Random Latin Hypercube, and Monte Carlo).
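To make the Inverse CDF technique concrete, here is a minimal sketch in Python rather than Analytica syntax, using the Exponential distribution because its inverse CDF has a simple closed form. The function name and parameters are mine, purely for illustration:

```python
import random
import math

def exponential_inverse_cdf(rate, n, seed=0):
    """Sample an Exponential(rate) distribution via the inverse CDF.

    CDF: F(x) = 1 - exp(-rate*x), so F^-1(u) = -ln(1-u)/rate.
    Because any scheme for choosing u in (0,1) works -- evenly spaced
    values (as Latin Hypercube does) or purely random ones -- the
    inverse CDF method supports all sampling methods.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = exponential_inverse_cdf(rate=2.0, n=10000)
mean = sum(samples) / len(samples)   # should be close to 1/rate = 0.5
```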

A recording of this webinar can be viewed at Custom-Distribution-Functions.wmv. The model file created during the webinar is Custom Distribution Functions.ana.

### Regular Expressions

**Date and Time:** Thursday, 9 July 2009, 10:00am - 11:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Analytica 4.2 exposes a powerful new ability to use Perl-compatible regular expressions for analyzing text. This feature is particularly applicable to parsing when importing data. Regular expressions are long known as the feature that makes Perl and Python popular as data-file processing languages, and that same power is now readily available within Analytica's FindInText, SplitText, and TextReplace functions.

This talk only touches on the regular expression language itself (information on which is readily available elsewhere); instead it focuses on using regular expressions from Analytica expressions, especially extracting the text that matches subpatterns and finding repeated matches.
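As a taste of subpattern extraction and repeated matching, here is the same idea in Python for portability (Analytica's FindInText offers analogous capabilities; the data line below is hypothetical):

```python
import re

# A hypothetical line resembling flattened census data. The parenthesized
# subpatterns (groups) capture a place name and a population figure.
line = 'Mountain View city, 74066; Palo Alto city, 64403'
pattern = r'([A-Za-z][A-Za-z ]*?) city, (\d+)'

# findall returns one tuple per repeated match, one element per subpattern.
matches = re.findall(pattern, line)
# matches == [('Mountain View', '74066'), ('Palo Alto', '64403')]
```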

One relatively complex example that I plan to work through is the parsing of census population data from datafiles downloaded from the U.S. census web site. The task includes parsing highly variable HTML, as well as multiple CSV files with formatting variations that occur from element to element. These variations, which are typical in many sources of data, demonstrate why the flexibility of regular expressions can be extremely helpful when parsing data files.

Regular expressions are extremely powerful, but when overused they can be very cryptic. So even though it is possible to get carried away with this power, it is good to know when to resist the temptation.

This talk is appropriate for moderate to advanced level modelers.

A recording of this webinar can be watched at Regular-Expressions.wmv. If you are new to regular expressions, I've included slides on the regular expression patterns that I made use of in this power point show (these were not shown during the webinar). The model file developed during the webinar is Regular expressions.ana.

### Using the Check Attribute to validate inputs and results

**Date and Time:** Thursday, 17 July 2008 10:00 Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The check attribute provides a way to validate inputs and computed results. When users of your model are entering data, it can provide immediate feedback when they enter values that are out of range or inconsistent. When applied to computed results, it can help catch inconsistencies, reducing error rates and the accidental introduction of errors later.

In this talk, I'll demonstrate how to define a check validation for a variable, and how to make the check attribute visible in the Object window. I'll demonstrate how the failed-check alert messages can be customized, and, perhaps most interestingly, how the check can be used in edit tables for cell-by-cell validation, so that out-of-range inputs are flagged with a red background and alert balloons pop up when they are entered. Cell-by-cell validation works when certain restrictions on the check expression are followed, which I'll discuss.

A recording of this webinar can be viewed at Check-Attribute.wmv (Note: There is audio, but screen is black, for first 50 seconds). The model used during this webinar, with the check attributes inserted, is at Check attribute -- car costs.ana.

### The Performance Profiler

**Date and Time:** October 9, 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

*Requires Analytica Enterprise*

When you have a model that takes a long time to compute, thrashes in virtual memory, or uses up available memory, the Performance Profiler can tell you where your model is spending its time and how much memory is being consumed by each variable to cache results. It is not uncommon to find that even in a very large model, a small number of variables (e.g., 2 to 5) account for the lion's share of time and memory. With this knowledge, you can focus your attention on optimizing the definitions of those few variables. On several occasions I've achieved more than a 100-fold speed-up in computation time on large models using this technique.

The Performance Profiler requires Analytica Enterprise or Optimizer. I'll demonstrate how to use the profiler, with some basic discussion of what it does and does not measure. One neat aspect of the profiler is that you can activate it after the fact. In other words, even if you haven't added profiling to your model, if you happen to notice something taking a long time, you can add it in to find out where the time was spent.

Using the Profiler is pretty simple, so I expect this session will be somewhat shorter than usual. The content is oriented primarily to people who are unfamiliar with the profiler, although I will also try to provide some behind-the-scenes details and can answer questions about it.

You can watch a recording of this webinar at Performance-Profiler.wmv. The model file containing the first few examples from the webinar can be downloaded from Simple Performance Profiler Example.ana.

## Organizing Models

### Modules and Libraries

**Date and Time:** 10 Dec 2009 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Modules form the basic organizational principle of an Analytica model, allowing models to be structured hierarchically, keeping things simple at every level even in very large complex models. You can use linked modules to store your model across multiple files. This capability enables reuse of libraries and model logic across different models, and allows you to divide your model into separate pieces so that different people can work concurrently on different pieces of the model.

In this talk, I will review many aspects of modules and libraries. We'll see how to use linked modules effectively. I'll cover the distinctions between Modules, Libraries, Models and Forms. I'll demonstrate various considerations when adding modules to existing models -- such as whether you want to import system variables or merge (update) existing objects, and some variations on what is possible there. We'll see how to change modules (or libraries) from being embedded to linked, or vice versa, and how to change the file location for a linked module. For a model distributed as multiple module files, I'll go over directory-structure considerations (the relative placement of module files), and also demonstrate how you can store a copy of your model with everything embedded in a single file for easy distribution.

I'll also discuss definition hiding and browse-only locking. By locking individual modules, you can create libraries with hidden and unchangeable logic that can be used in the context of other people's models, keeping your algorithms hidden. Or, you can distribute individual models that are locked as browse only, even in the context of a larger model where the remainder of the model is editable.

I'll talk about using linked modules in the context of a source control system, which is often of interest for projects where multiple people are modifying the same model. I'll also reveal an esoteric feature, the Sys_PreLoadScript attribute, and how this can be used to implement your own licensing and protection of intellectual property.

This webinar is appropriate for all levels of Analytica model builders.

You can watch a recording of this webinar at Linked-Modules.wmv. The starting model used in the webinar can be downloaded from Loan_policy_selection_start.ana, and then you can follow along to introduce and adjust modules as depicted in the recording if you like.

## Uncertainty & Probability Topics

### Gentle Introduction to Modeling Uncertainty: Webinar Series

**Date and Time:**

- **Session 1:** Thursday, 29 Apr 2010 10:00am Pacific Daylight Time
- **Session 2:** Thursday, 6 May 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Are you someone who has never built a model containing explicit representations of uncertainty? Did that Statistics 1A class you took long ago instill a belief that probability distributions are irrelevant to the type of work you do? Are you afraid to start representing uncertainty explicitly because you just don't have the statistics background and don't know much about probability and probability distributions?

If any of these sentiments resonate with you, then this webinar (series) may be for you.

These are interactive webinars. Be prepared to answer some questions, and have Analytica fired up in the background. You are going to use it to compute the answers to a couple of exercises! Even if you are watching the recording, be ready to complete the exercises.

This webinar series is most appropriate for:

- Beginning Analytica model builders.
- Users of models that present results with uncertainty.
- Accomplished spreadsheet or Analytica model builders who have not previously incorporated uncertainty.
- People looking to learn the basics of probability for the representation of uncertainty.

#### Session 1: Uncertainty and Probability

The first session discusses different sources and types of uncertainty, probability distributions and how they can be used to represent uncertainty, various interpretations of probabilities and probability distributions, and reasons why it is valuable to represent uncertainty explicitly in your quantitative models.

A recording of this webinar can be viewed at: Modeling-Uncertainty1.wmv. A copy of the model created by the presenter during the webinar (the scholarship example) can be downloaded from Modeling uncertainty 1 - princeton scholarship.ana. Power point slides can be downloaded from: Modeling Uncertainty 1.ppt.

#### Session 2: Probability Distributions

How do you characterize the amount of uncertainty you have regarding a real-valued quantity? This second session explores this question, and introduces the concepts of average deviation (aka absolute deviation), variance and standard deviation. It then introduces the concept of a *probability distribution* and the Normal and LogNormal distributions. We examine the expected value of including uncertainty and do a few modeling exercises that demonstrate how it can be highly misleading, even expensive, to ignore uncertainty.
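The three spread measures introduced in this session can be computed directly from a data set. A small Python illustration with made-up numbers:

```python
import math

# Hypothetical data set; three measures of spread from the session.
data = [12.0, 15.0, 9.0, 14.0, 10.0]
n = len(data)
mean = sum(data) / n

# Average (absolute) deviation: mean distance from the mean.
avg_deviation = sum(abs(x - mean) for x in data) / n

# Sample variance and its square root, the standard deviation.
variance = sum((x - mean) ** 2 for x in data) / (n - 1)
std_deviation = math.sqrt(variance)
```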

A recording of this webinar can be viewed at Prob-Distributions.wmv. The model built during the webinar can be downloaded from Probability Distributions Webinar.ana. Power point slides are at Modeling Uncertainty 2.ppt.

#### Session 3: Monte Carlo

**Date and Time:** Thursday, 13 May 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

In this third webinar in the "Gentle Introduction to Modeling Uncertainty" series, we will see how a probability distribution can be represented as a set of representative samples, and how this leads to a very general method for propagating uncertainty to computed results. This method is known as Monte Carlo simulation.

Analytica represents uncertainty by storing a representative sample, so we'll be learning how Analytica actually carries out uncertainty analysis. We explore how all the uncertainty result views in Analytica are created from the sample, and learn various 'tricks' for producing nice histograms in PDF views in various situations.

We'll learn about the Run index and how it places samples across different variables in correspondence. We'll learn about the generality of the Monte Carlo method for propagating uncertainty, and also learn what Latin Hypercube sampling is.
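The core idea, representing each uncertain quantity as a sample sharing one Run index and evaluating the model elementwise over it, can be sketched in a few lines of Python (the inputs and their distributions below are made up):

```python
import random

# Two uncertain inputs represented as samples sharing one "Run" index:
# element i of each list belongs to the same simulation run, which is
# how correspondence between variables is maintained.
rng = random.Random(42)
n = 20000
price = [rng.normalvariate(10.0, 1.0) for _ in range(n)]   # $/unit
quantity = [rng.uniform(800, 1200) for _ in range(n)]      # units

# Propagating uncertainty is just elementwise evaluation over the runs.
revenue = [p * q for p, q in zip(price, quantity)]

mean_revenue = sum(revenue) / n   # close to 10 * 1000 = 10,000
```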

A recording of this webinar can be viewed at Monte-Carlo.wmv. The power point slides are at: Monte Carlo Simulation.ppt. Example models created during the webinar include: Mining Example.ana, Explicit samples.ana and Representing Uncertainty 3 - Misc.ana (product of normals and comparison between Latin Hypercube and Monte Carlo precision).

#### Session 4: Measures of Risk and Utility

**Date and Time:** Thursday, 20 May 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This fourth webinar in the "Gentle Introduction to Modeling Uncertainty" series will explore concepts and quantitative measures of Risk and Utility. We'll discuss various conceptions and types of risk, and explore topics relevant to model-building that include utility and loss functions, expected value, expected utility, risk neutrality, risk aversion, fractiles and Value-at-risk (VaR).
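Several of these measures fall directly out of a Monte Carlo sample. The following Python sketch, with a hypothetical profit/loss distribution, computes the expected value, a 5% fractile, and the corresponding Value-at-Risk:

```python
import random

# Monte Carlo sample of a hypothetical profit/loss distribution.
rng = random.Random(7)
pnl = sorted(rng.normalvariate(100.0, 50.0) for _ in range(10000))

expected_value = sum(pnl) / len(pnl)

# The 5% fractile (quantile): 95% of outcomes exceed this value.
fractile_5 = pnl[int(0.05 * len(pnl))]

# Value-at-risk at the 95% level is how bad the outcome gets in the
# worst 5% of cases, expressed as a loss.
var_95 = -fractile_5
```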

A recording of this webinar can be viewed at Risk-And-Utility.wmv. The power point slides can be viewed at Measures of Risk and Utility.ppt. There is an interesting modeling exercise and exploration of Expected Shortfall near the end of the power point slides that was not covered during the webinar. The worked out model examples from the webinar, along with a solution to the final example not covered, can be downloaded from Measures of Risk.ana.

#### Session 5: Risk Analysis for Portfolios

**Date and Time:** Thursday, 3 June 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Committing to a single project or investing in a single asset entails a certain amount of risk along with the potential payoff. If you are able to proceed with multiple projects or invest in multiple assets, the degree of risk may be reduced substantially with small impact on potential return. In this fifth webinar in the "Gentle Introduction to Modeling Uncertainty" series, we'll look at modeling portfolios, such as portfolios of investments or portfolios of research and development projects, and the impact this has on risk and return. Portfolio analysis is the basis for practices such as diversification and hedging, and is a key element of risk management.
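The diversification effect is easy to verify numerically. This Python sketch, with hypothetical independent asset returns, compares the spread of a single asset against an equal-weight portfolio of ten such assets:

```python
import random

rng = random.Random(1)
n_runs, n_assets = 20000, 10

# Each asset has the same (hypothetical) uncertain return:
# mean 5%, standard deviation 20%, independent of the others.
def asset_return():
    return rng.normalvariate(0.05, 0.20)

single = [asset_return() for _ in range(n_runs)]
portfolio = [sum(asset_return() for _ in range(n_assets)) / n_assets
             for _ in range(n_runs)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# With independent assets, the portfolio SD shrinks by sqrt(n_assets):
# about 0.20 / sqrt(10) = 0.063, while the mean return stays near 5%.
sd_single, sd_portfolio = sd(single), sd(portfolio)
```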

As with other topics in this webinar series, the presentation and discussion is designed for people who are new to the use of these concepts in a model building context.

You can watch a recording of this webinar at: Portfolio-Risk.wmv. The Power Point slides are at Risk Analysis for Portfolios.ppt. These include some exercises at the end (for homework!) not covered during the webinar, including continuous portfolio allocations. The model developed during the webinar, augmented to include answers to the additional exercises, is at Risk Analysis for Portfolios.ana.

#### Session 6: Common Parametric Distributions

**Date and Time:** Thursday, 10 June 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

During the first five sessions of the *Gentle Introduction to Modeling Uncertainty* webinar series, you have been introduced to three distribution functions: Bernoulli, Normal and LogNormal. In this webinar, we're going to increase this repertoire and learn about other common parametric distributions. I'll discuss situations where specific distributions are particularly convenient or natural for expressing uncertainty about certain types of quantities, and other reasons for why you might prefer one particular distribution type over another. We'll also examine the distinction between discrete and continuous distributions.

As with other topics in this webinar series, the presentation and discussion is designed for people who are new to the use of these concepts in a model building context.

A recording of the webinar can be viewed at Parametric-Distributions.wmv. The power point slides are at Common Parametric Distributions.ppt, and the Analytica model containing the exercises and solutions to exercises not covered during the live recording is at Common-Parametric-Distributions.ana.

#### Session 7: Expert Assessment of Uncertainty

**Date and Time:** Thursday, 24 June 2010 10:00am Pacific Daylight Time

**Presenter:** Max Henrion, Lumina Decision Systems

**Abstract**

In most uncertainty analyses, uncertainties about many key quantities must be assessed by expert judgment. There has been a lot of empirical research on people's ability to express their knowledge and uncertainty in the form of probability distributions. It shows that we are liable to a variety of biases, such as overconfidence and motivational biases. I'll give an introduction to practical methods developed by decision analysts to avoid or minimize these biases. I'll give some examples from recent work in expert elicitation for the Department of Energy on the future performance of renewable energy technologies. I'll also discuss ways to aggregate judgments from different experts.

The session is appropriate for people who are new to this area. This probably includes just about everybody!

This session will draw from Chapters 6 and 7 of "Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis" by M Granger Morgan & Max Henrion, Cambridge University Press, 1992.

Note: There is no recording of this webinar.

#### Session 8: Hypothesis Testing

**Date and Time:** Thursday, 15 July 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Hypothesis testing from classical statistics addresses the question of whether the apparent support for a given hypothesis is statistically significant. In the field of classical statistics, this is perhaps the most heavily emphasized application of probability concepts, and the methodology is used (if not required by editors) when publishing results for research studies in nearly every field of empirical study.

To illustrate the basic idea, suppose a journalist selects 10 Americans at random and asks whether they support a moratorium on deep sea drilling. Seven of the 10 respond "yes", so the next day he publishes his article "The Majority of Americans Support a Moratorium on Deep Sea Drilling". His sample is certainly consistent with this hypothesis, but his conclusion is not credible because with such a small sample, this majority could easily have been a random quirk (sampling error). Hence we would say that the conclusion is not "*statistically significant*". But how big does the sample have to be to achieve statistical significance? Where should we draw the line when determining whether the data's support is statistically significant? These are the types of questions addressed by this area of statistics.

Hypothesis testing is a central topic in every introductory Statistics 1A course, often comprising more than half of the total course syllabus. But most introductory courses emphasize a cookbook approach at the expense of conceptual understanding, apparently in the hope of providing people in non-statistical fields with step-by-step recipes to follow when they need to publish results in their own fields. As a result, the methodology is possibly misused more often than it is applied correctly, and published results are commonly misinterpreted.

In this seminar, I intend to emphasize a conceptual understanding of the statistical hypothesis-testing methodology rather than the more traditional textbook methodology. After this webinar, when you read "our hypothesis was confirmed by the data at a p-value=0.02 level", or "the hypothesis was rejected with a p-value of 0.18", you should be able to state precisely what these statements really do or do not imply. You should understand what a p-value and confidence level really denote -- they do not represent, as many people think, the probability that the hypothesis is true.

We will also, of course, examine how we can carry out computations of significance levels (i.e., p-values) within Analytica. Statistics texts are filled with numerous "standard" hypothesis tests (e.g., t-tests, etc.), each based on a specific set of assumptions. In this webinar, we'll approach this in a more general way, where we get to start with our own set of arbitrary assumptions, leveraging the power of Monte Carlo for computation. This means there are no recipes to remember; you can compute significance levels for any statistical model, even if the same assumptions don't appear in your statistics texts; and, most importantly, you'll be left with a more general understanding of the concepts.
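The journalist example above can be settled in exactly this way. This Python sketch estimates the p-value of seeing 7 or more "yes" answers out of 10 by Monte Carlo, under the null hypothesis that true support is 50%:

```python
import random

# Null hypothesis for the journalist's poll: true support is 50%.
# The p-value is the probability of getting 7 or more "yes" answers
# out of 10 from sampling error alone, estimated by Monte Carlo.
rng = random.Random(0)
trials = 100000
hits = sum(
    1 for _ in range(trials)
    if sum(rng.random() < 0.5 for _ in range(10)) >= 7
)
p_value = hits / trials
# The exact value is P(X >= 7) for X ~ Binomial(10, 0.5),
# i.e. 176/1024 = 0.172 -- far above 0.05, so the headline
# is not statistically significant.
```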

As a prerequisite, this webinar will assume little more than the introductory background from the earlier webinars in this "Gentle Introduction to Modeling Uncertainty" series. It is appropriate for people who have never taken a Statistics 1A course, or for the majority of people who have taken that introduction to Statistics but could use a refresher.

You can watch a recording of this webinar at Hypothesis-Testing.wmv. To follow along with the webinar, you'll want to also download the Analytica model file Hypothesis Test S&P Volatility.ana before starting. You'll use the data in that model for the various exercises during the webinar.

Solutions to exercises are saved in this version of the model (created during the webinar): Hypothesis Test S&P Volatility solution.ana. I also inserted a solution to the Parkinson's data test that wasn't covered in the webinar but is contained in the Power Point slides.

### Expecting the Unexpected: Coping with surprises in Probabilistic and Scenario Forecasting

**Date and Time:** Thursday, 7 April 2011, 10:00am Pacific Daylight Time

**Presenter:** Max Henrion, Ph.D., Lumina Decision Systems

**Abstract**

The notion of "Black Swans", reinforced by the financial debacles of 2008, confirms decades of research on expert judgment and centuries of anecdotes about the perils of prediction: Our forecasts are consistently overconfident and we are too often surprised. Henrion will explain why forecasters, risk analysts, and R&D portfolio managers should embrace the inevitable uncertainties using scenarios or probability distributions. He will describe a range of practical methods including:

- The value of knowing how little you know — why and when to treat uncertainty explicitly
- Elicitation of expert judgment and how to minimize cognitive biases
- Using Monte Carlo for probabilistic forecasting and risk analysis.
- Calibrating probabilistic forecasts against the historical distributions of forecast errors and surprises.
- Brainstorming to identify "Gray Swans" — surprises that are foreseeable, but ignored in conventional forecasting.

Participants will come away with a deeper understanding of when and how to apply these methods.

You can download the PowerPoint slides used for the webinar: Expecting the Unexpected.pptx (note: If your browser changes this into *.zip when downloading, save it and rename to "Expecting the Unexpected.pptx" before you try to open it). You can watch a recording of the webinar from ExpectingTheUnexpected.wmv.

### Correlated and Multivariate Distributions

**Date and Time:** Thursday, March 13, 2008 10:00 Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This talk will discuss various techniques within Analytica for defining probability distributions that have specified marginal distributions and are also correlated with other uncertain variables. Techniques include the use of conditional and hierarchical distributions, multivariate distributions, and Iman-Conover rank-correlated distributions.

The model created during this talk is Correlated distributions.ana. You can watch a recording of the webinar from Correlated-Distributions.wmv.

### Assessment of Probability Distributions

**Date and Time:** March 6, 2008 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

When building a quantitative model, we usually need to come up with estimates for many of the parameters and input variables that we use in the model. Because these are estimates, it is a good idea to encode them as probability distributions, so that our degree of *subjective uncertainty* is explicit in the model. The process of encoding a distribution to reflect the level of knowledge that you (or the experts you work with) have about the true value of the quantity is referred to as *probability (or uncertainty) assessment* or *probability elicitation*.

This webinar will be a highly interactive one, where all attendees are expected to participate in a series of uncertainty assessments as we explore the effects of cognitive biases (such as over-confidence and anchoring), understand what it means to be *well-calibrated*, and utilize scoring metrics to measure your own degree of calibration. These exercises can help you improve the quality of your distribution assessments, and serve as tools that can help you when eliciting estimates of uncertainty from other domain experts.

The Analytica model Probability assessment.ana contains a game of sorts that takes you through several probability assessments and scores your responses. Participants of the webinar played this game by running this model; if you are going to watch the webinar, you will want to do the same. You may want to wait until the appropriate point in the webinar (after the preliminary material has been covered) before starting. You can watch the webinar recording here: Probability-Assessment.wmv. The power point slides from the talk are here: Assessment_of_distributions.ppt.

### Statistical Functions

**Date and Time:** Thursday, 21 Aug 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

*This topic was presented in Aug 2007, but not recorded at that time.*

A statistical function processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review the statistical functions that are built into Analytica. I'll describe several built-in statistical functions such as Mean, SDeviation, GetFract, Pdf, Cdf, and Covariance. I'll demonstrate how all built-in statistical functions can be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index). I'll discuss how the domain attribute should be used to indicate that numeric-valued data is discrete (such as integer counts), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, and using a weighted Frequency for rapid aggregation.

In addition, all built-in statistical functions can compute weighted statistics, where each point is assigned a different weight. I'll briefly touch on this feature as a segue into next week's topic, Importance Sampling.
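Weighted statistics follow a simple pattern: every sum in the statistic is taken with per-point weights. A small Python illustration with made-up values and weights:

```python
# Weighted statistics: each sample point i carries a weight w[i],
# analogous to Analytica's sample weighting (used, e.g., in
# importance sampling). Values and weights here are made up.
values  = [1.0, 2.0, 3.0, 4.0]
weights = [0.1, 0.2, 0.3, 0.4]

total = sum(weights)
w_mean = sum(w * v for w, v in zip(weights, values)) / total
w_var = sum(w * (v - w_mean) ** 2 for w, v in zip(weights, values)) / total
# With these numbers: w_mean == 3.0 and w_var == 1.0.
```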

This talk can be viewed at Statistical-Functions.wmv. The model built during this talk is available for download at Intro to Statistical Functions.ana.

### Spearman Rank Correlation

**Date and Time:** Thursday, 25 March 2010 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Many measures for quantifying the degree of statistical dependence between quantities are used in statistics. The two most commonly used are Pearson's Linear Correlation and Spearman's Rank Correlation, computed in Analytica by the functions Correlation and RankCorrel respectively. Pearson's Correlation, which is what people usually mean when they just say "Correlation", is a measure of how linear the relationship between two variables is. Spearman's Rank Correlation is a measure of how monotonic the relationship between two variables is.
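The distinction is easy to see numerically. In this Python sketch (standing in for Analytica's Correlation and RankCorrel), a perfectly monotonic but nonlinear relationship gets a Spearman rank correlation of 1 while its Pearson correlation falls short:

```python
# A monotonic but nonlinear relationship: y = x**3.
xs = list(range(1, 21))
ys = [x ** 3 for x in xs]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def ranks(a):
    order = sorted(range(len(a)), key=lambda i: a[i])
    r = [0] * len(a)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

# Spearman rank correlation = Pearson correlation of the ranks.
spearman = pearson(ranks(xs), ranks(ys))   # 1.0: perfectly monotonic
linear = pearson(xs, ys)                   # less than 1: not linear
```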

This talk provides an introduction to the concept of rank correlation, how it is distinguished from standard Pearson correlation, and what it measures. There are several notable and rather diverse uses of RankCorrel, which include these (and probably many others):

- A quantitative measure of the degree to which two variables are monotonically related. (E.g., the degree to which an increase in one leads to an increase, or decrease, in the other).
- Testing (from measurements) whether two factors are statistically dependent
- Importance analysis: determining how much the uncertainty of an input contributes to the uncertainty of an output.
- Sampling from joint distributions with arbitrary marginals and specified rank-correlations (Correlate_With and Correlate_Dists)

I will focus mostly on the first two uses in this talk (previous webinars on Sensitivity Analysis have covered the Importance Analysis usage to some extent, and a previous webinar on Correlated and Multivariate Distributions has covered the last point).

Standard hypothesis tests exist for determining whether two factors are statistically dependent by testing the hypothesis that their rank correlation is non-zero (null hypothesis that it is zero). When the P-value of these tests is less than 5% (or 1%), you would be justified in concluding that the two variables are statistically dependent. I will demonstrate how to compute this P-value.
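One general way to obtain such a p-value without relying on a named test is a permutation test: shuffling one variable destroys any dependence, so the shuffled rank correlations show what sampling error alone produces under the null hypothesis. A Python sketch with hypothetical paired data:

```python
import random

rng = random.Random(3)

def ranks(a):
    order = sorted(range(len(a)), key=lambda i: a[i])
    r = [0.0] * len(a)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def rank_correl(a, b):
    # With no ties, both rank vectors are permutations of 1..n, so
    # they share the same variance and this ratio is Spearman's rho.
    ra, rb = ranks(a), ranks(b)
    m = sum(ra) / len(ra)
    cov = sum((x - m) * (y - m) for x, y in zip(ra, rb))
    var = sum((x - m) ** 2 for x in ra)
    return cov / var

# Hypothetical paired measurements with a real monotonic dependence.
x = [rng.uniform(0, 1) for _ in range(30)]
y = [xi + rng.uniform(-0.2, 0.2) for xi in x]
observed = rank_correl(x, y)

# Permutation test: how often does shuffling alone give a rank
# correlation at least this extreme?
extreme, permutations = 0, 2000
for _ in range(permutations):
    y_shuffled = y[:]
    rng.shuffle(y_shuffled)
    if abs(rank_correl(x, y_shuffled)) >= abs(observed):
        extreme += 1
p_value = extreme / permutations   # small: dependence is significant
```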

Then I will introduce a new analysis of rank correlation that I came up with, which I think is novel and potentially quite useful, somewhat related to the classical hypothesis tests just mentioned. Suppose you gather a small sample of data on two variables in a study and you want to determine how strong the monotonicity between the two variables is. You can compute the *sample rank correlation* for the data set, but this is only an estimate, since you have a small sample size and sampling error may throw off the estimate. So suppose we imagine there is some "true" underlying rank correlation between the variables (this in itself is a new concept, which I will make precise). From your data set, you have some knowledge about the true value of this underlying rank correlation -- the larger your sample size, the more precise your knowledge is. The new technique I describe here computes a (posterior) distribution over the true underlying rank correlation, from which you can express your rank correlation result as a range (such as rc=0.6±0.2), and answer questions such as what the probability is that the underlying rank correlation is between -0.1 and 0.1, P(-0.1 < rc < 0.1), or P(rc > 0), etc. Although this is essentially a posterior distribution, there is no prior distribution involved or needed to compute it, so it is simply a function of the measured data and the sample size. It really is a probability distribution on the underlying rank correlation, not just a P-value, making it much more useful.

This new analysis is also useful for quantifying the probability that two factors are independent, in a manner not possible with the classical tests. The classical P-value of the aforementioned tests measures the probability of a Type II error for the hypothesis that the variables are dependent. These tests do not provide the probability of a Type I error, which would be the criterion for concluding that a claim of statistical independence is statistically significant. This new measure, however, can justifiably be used to quantify a claim of statistical independence, since it allows P(-c < rc < c) to be computed for any c.

I will demonstrate how this new analysis of rank correlation works and is encoded within Analytica, and show how to read off the interesting results.
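As background, the classical *sample* rank correlation that this analysis starts from is just the Pearson correlation of the ranks. A minimal sketch in Python (the webinar itself works in Analytica; the data here is made up):

```python
# Sample (Spearman) rank correlation: Pearson correlation of the ranks.
# Illustrative only -- not the webinar's new posterior analysis.
from math import sqrt

def ranks(xs):
    """Average ranks (1-based), handling ties by midranking."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # midrank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_correlation(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = sqrt(sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry))
    return num / den

# A perfectly monotone (but nonlinear) relationship gives rc = 1.
print(rank_correlation([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]))  # → 1.0
```

With ties, each tied block receives its average (mid) rank, matching the usual Spearman convention.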

A recording of this webinar can be viewed at: Rank-Correlation.wmv. The model files created during the talk are available at: Rank-Correlation-Examples.ana and Rank-Correlation-Analysis.ana.

### Statistical Functions in Analytica 4.0

**Date and Time:** Thursday, Aug 16, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.

In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.

The Analytica model file as it stood at the end of the presentation can be downloaded here: User Group Webinar - Statistical Functions.ANA.

### The Large Sample Library

**Date and Time:** Thursday, 18 Feb 2010 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The Large Sample Library is an Analytica library that lets you run a Monte Carlo simulation for a large model or a large sample size that might otherwise exhaust computer memory, including virtual memory. It breaks up a large sample into a series of batch samples, each small enough to run in memory. For selected variables, known as the *Large Sample Variables* or *LSVs*, it accumulates the batches into a large sample. You can then view the probability distributions for each LSV using the standard methods — confidence bands, PDF, CDF, etc. — with the full precision of the large sample.

Memory is saved by not storing results for non-LSVs.

This presentation introduces this library and how to use it.
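The batching idea can be sketched outside Analytica as follows. This is a hypothetical illustration, not the library's actual implementation: the "model", its inputs, and the batch sizes are all invented, and only the selected output (the LSV) is accumulated across batches:

```python
# Sketch of batched Monte Carlo: run a large sample as a series of small
# batches, keeping only the output of interest (the "LSV") from each batch.
import random

def simulate_batch(batch_size, rng):
    """One batch: pretend the 'model' computes revenue from two uncertain inputs."""
    out = []
    for _ in range(batch_size):
        price = rng.gauss(10, 2)       # intermediate values like these are
        volume = rng.gauss(1000, 100)  # discarded once the batch finishes
        out.append(price * volume)     # only the selected output is kept
    return out

def large_sample(total, batch_size, seed=1):
    rng = random.Random(seed)
    sample = []
    for _ in range(total // batch_size):
        sample.extend(simulate_batch(batch_size, rng))  # accumulate the LSV
    return sample

profits = large_sample(total=10_000, batch_size=500)
print(len(profits))  # → 10000
```

Memory use at any moment is bounded by one batch, while the accumulated list for the selected variable grows to the full sample, mirroring the library's trade-off.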

You can watch a recording of this webinar at Large-Sample-Library.wmv. The Large Sample library can be downloaded for use in your own models from the Large Sample Library: User Guide page. The two example models used during this webinar were: Enterprise model3.ana and Simple example for Large Sample Library.ana.

## Sensitivity Analysis Topics

### Tornado Charts

**Time and Date:** Thursday, 20 Mar 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract:**

A tornado chart depicts the result of a **local sensitivity analysis**, showing how much a computed result would change as the inputs are varied one at a time, with all other inputs held at their baseline values. The result is usually plotted with horizontal bars, sorted with the larger bars on top, so the graph resembles the shape of a tornado, hence the name. There are numerous variations on tornado charts, resulting from different ways of varying the inputs and, in some cases, different metrics graphed.

This talk will walk through the steps of setting up a Tornado chart, and explore different variations of varying inputs. We'll also explore some more complex issues that can arise when some inputs are arrays.
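A minimal sketch of the computation behind a tornado chart, with an invented three-input model and invented low/high ranges (the webinar builds the real thing in Analytica):

```python
# Minimal tornado computation: vary one input at a time between a low and
# high value, holding the others at baseline. Model and numbers are made up.
def model(inputs):
    return inputs["price"] * inputs["volume"] - inputs["fixed_cost"]

baseline = {"price": 10.0, "volume": 1000.0, "fixed_cost": 4000.0}
ranges = {"price": (8.0, 12.0), "volume": (800.0, 1200.0),
          "fixed_cost": (3000.0, 5000.0)}

bars = []
for name, (lo, hi) in ranges.items():
    low = model({**baseline, name: lo})
    high = model({**baseline, name: hi})
    bars.append((name, low, high, abs(high - low)))  # swing = bar length

# Sort with the largest swing on top, giving the tornado shape.
for name, low, high, swing in sorted(bars, key=lambda b: -b[3]):
    print(f"{name:>10}: {low:>8.0f} .. {high:>8.0f} (swing {swing:.0f})")
```

Each bar's length (the "swing") is the change in the result over that input's range; sorting by swing produces the tornado shape.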

The model used during this talk is here: Tornado Charts.ana (the stuff for the talk was in the Tornado Analysis module). You can watch a recording of this webinar from Tornado-Charts.wmv.

### Advanced Tornado Charts -- when inputs are Array-Valued

**Date and Time:** Thursday, April 17, 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The webinar of 20-Mar-2008 (Tornado-Charts.wmv, see webinar archives) went through the fundamentals of setting up a local sensitivity analysis and plotting the results in the form of a tornado chart. That webinar also discussed the many variations of tornado analyses (or more generally, local sensitivity analyses) that are possible.

This talk builds on those foundations by going a step further and addressing tornado analyses when some of the input variables are array-valued. The presence of array-valued inputs introduces many additional possible variations of analyses, as well as many modeling complications. For example, a local sensitivity analysis varies one input at a time, but that could mean you vary each input variable (as a whole) at a time, or it could mean that you vary each cell of each input array individually. Either is possible, each resulting in a different analysis. Some of these variations compute the correct result automatically through the magic of array abstraction, once you've set up the basic tornado analysis that we covered in the first talk, while others require quite a bit of additional modeling effort. However, even the ones that produce the correct result can often be made more efficient, particularly when the indexes differ across input variables.

When we do opt to vary input arrays one cell at a time, the display of the results may be dramatically affected. Although we can keep the results in array form, the customary tornado chart requires us to *flatten* the multi-D arrays and label each bar on the chart with a cell coordinate.

A recording of this webinar can be viewed at Tornados-With-Arrays.wmv. This webinar made use of the following models: Sales Effectiveness Model with tornado.ana, Biotech R&D Portfolio with Tornado.ana, Sensitivity Analysis Library.ana, and Sensitivity Functions Examples.ana. See The Sensitivity Analysis Library for more information on how to use Sensitivity Analysis Library.ana in your own models.

## Financial Analysis

### Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR)

**Date and Time:** 18 Dec 2008, 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This is Part 3 of a multi-part webinar series where we have been covering the modeling and evaluation of cash flows over time in an interactive exercise-based webinar format, where concepts are introduced in the form of modeling exercises, and participants are asked to complete the exercises in Analytica during the webinar. Part 3 covers Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR), and includes seven modeling exercises.

To speed the presentation up, I am providing the exercises in advance: NPV_and_IRR.ppt. I urge you to take a shot at completing them before the webinar begins, and we'll advance through the exercises more rapidly so as to complete the topic material within the hour. By attempting the exercises in advance, you'll have a good opportunity to compare your solutions to mine, and to ask questions about things you got stuck on.

A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics examining the effective value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.

This webinar will be highly interactive. Fire up an instance of Analytica as you join. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.
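For reference, the two metrics can be sketched in a few lines of Python (invented cash flow; the webinar computes these in Analytica):

```python
# NPV and IRR on a simple cash flow. IRR is the discount rate at which
# NPV crosses zero, found here by bisection. All numbers are made up.
def npv(rate, cash_flows):
    """cash_flows[t] occurs at the end of period t; t = 0 is today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # NPV decreases in rate for an investment (outflow first, inflows later)
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]     # invest 1000, receive 400 for 3 years
print(round(npv(0.10, flows), 2))  # → -5.26 (negative NPV at a 10% rate)
print(round(irr(flows), 4))        # rate at which NPV = 0, just under 10%
```

The bisection assumes a conventional investment profile (one sign change in the cash flow); cash flows with multiple sign changes can have multiple IRRs, one of the "gotchas" the series discusses.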

See also the materials from Parts 1 and 2 (Net Present Value, 20 Nov 2008 and 4 Dec 2008) elsewhere on this page. This session begins with the model Cash Flow Metrics 2.ana, and ends with Cash Flow Metrics 3.ana. You can watch a recording of this webinar at IRR.wmv.

### Bond Portfolio Analysis

**Date and Time:** 11 Dec 2008, 10:00am Pacific Standard Time

**Presenter:** Rob Brown, Incite! Decision Technologies

**Abstract**

I demonstrate how to value a bond portfolio in which bonds are bought and sold on an uncertain frequency. The demonstration shows how Intelligent Arrays and related functions can greatly simplify calculations of multiple dimensions that would typically require multiple interconnected sheets in a spreadsheet or nested do-loops in a procedural language.

You can watch a recording of this webinar at Bond-Portfolio-Analysis.wmv. The model underlying the presentation is Bond Portfolio Valuation.ana, and the power point slides are at Bond Portfolio Valuation.ppt.

### Net Present Value (NPV)

**Date and Time:** Part I : Thursday, 20 Nov 2008, 10:00am Pacific Standard Time

- Part II : Thursday, 4 Dec 2008, 10:00am Pacific Standard Time

(Parts 1 & 2 cover NPV -- part 3, listed now separately, covers IRR)

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics examining the effective value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.

This multi-part webinar provides an introduction to the concepts of present value, discount rate, NPV and IRR. We'll discuss the interpretation of *discount rate*, and we'll get practice computing these metrics in Analytica. We'll examine the pitfalls of each metric, and we'll examine the interplay of each metric with explicitly modelled uncertainty (including the concepts of Expected NPV (ENPV) and Expected IRR (EIRR)).

This webinar will be highly interactive. Fire up an instance of Analytica as you join. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.

I have assembled quite a bit of material, which I believe will fill two webinar sessions. Part 1 will focus mostly on present value, NPV, discount rate, and the use of NPV with uncertainty. Part 2 will focus mostly on IRR, several "gotchas" with IRR, and MIRR.

Materials:

- Cash Flow Metrics 2.ana : Model at end of second session
- Cash Flow Metrics 1.ana : Model at end of first session
- NPV-and-IRR1.wmv : Webinar recording of Part 1.
- NPV-and-IRR2.wmv : Webinar recording of Part 2.

Note: Part 1 covered 5 exercises, covering present value, discount rate, modeling certain cash flows, computing NPV, and graphing the NPV curve. Part 2 added exercises 6-9, covering cash flows at non-uniformly-spaced time periods, valuing bonds and treasury notes, cash flows with uncertainty, and using the CAPM to find the investor-implied corporate discount rate.

The "class" will continue with Part 3 beginning with Internal Rate of Return.

## Data Analysis Techniques

### Statistical Functions

**Date and Time:** Thursday, May 22, 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.

In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.

This talk is appropriate for moderate to advanced users.
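The weighted-statistics idea itself is simple, and can be sketched generically (this is not Analytica's internal implementation):

```python
# Weighted mean and variance: each sample point i carries a weight w[i].
# Equal weights reduce to the ordinary (population) statistics.
def weighted_mean(xs, ws):
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

def weighted_variance(xs, ws):
    m = weighted_mean(xs, ws)
    return sum(w * (x - m) ** 2 for x, w in zip(xs, ws)) / sum(ws)

xs = [1.0, 2.0, 3.0, 4.0]
print(weighted_mean(xs, [1, 1, 1, 1]))  # → 2.5 (ordinary mean)
print(weighted_mean(xs, [0, 0, 0, 1]))  # → 4.0 (all weight on one point)
```

Re-weighting the same sample changes every downstream statistic, which is what makes a single weighting mechanism serve both posterior computation and importance sampling.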

A recording of this webinar can be watched at Statistical-Functions.wmv. The model created during this webinar is at Statistical Functions.ana.

### Principal Components Analysis (PCA)

**Date and Time:** 15 Jan 2009, 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Principal component analysis (PCA) is a widely used data analysis technique for dimensionality reduction and identification of underlying common factors. This webinar will provide a gentle introduction to PCA and demonstrate how to compute principal components within Analytica. Intended to be at an introductory level, with no prior experience with PCA (or even knowledge of what it is) assumed.
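As background, the core computation can be sketched with numpy on synthetic data (the webinar's own example used real stock data; everything below is invented):

```python
# PCA in a few lines: center the data, take the SVD; the right singular
# vectors are the principal components. Data here is synthetic: five series
# driven by one common factor plus noise, so one component should dominate.
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))                  # the hidden common factor
loadings = np.array([[1.0, 0.8, -0.5, 1.2, 0.6]])  # how each series loads on it
data = common @ loadings + 0.3 * rng.normal(size=(200, 5))

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(np.round(explained, 3))     # the common factor makes the first large
scores = centered @ Vt[0]         # projection onto the first component
```

The rows of `Vt` play the role of the underlying common factors; the explained-variance fractions tell you how many components are worth keeping.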

The model developed during this talk, in which the principal components were computed for 17 publicly traded stocks based on the previous 2 years of price change data, is Principal Component Analysis.ana. A recording of this webinar can be viewed at PCA.wmv.

### Variable Stiffness Cubic Splines

**Date and Time:** 2 October 2008, 10:00am Pacific Daylight Time

**Presenter:** Brian Parsonnet, ICE Energy

**Abstract**

The Variable Stiffness Cubic Spline is a highly robust data smoothing and interpolation technique. A stiffness parameter adjusts the variability of the curve. At the extreme of minimal stiffness, the curve approaches a cubic spline (like CubicInterp) that passes through all data points, while at the other extreme of maximal stiffness, the spline curve becomes the best-fit line. Weight parameters can be used to constrain the curve to include selected points, while smoothing over others. The first, second and third derivatives all exist and are readily available.

I'll introduce and demonstrate User-Defined Functions that compute the variable stiffness cubic spline and interpolate to new points. I'll also show how these curves can be used to detect or eliminate anomalies in data.
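The stiffness behavior described above can be illustrated with a discrete (Whittaker-style) smoother, used here as a stand-in for the presenter's spline functions, which are not reproduced: minimize ||y - f||² + stiffness·||D₂f||², where D₂ takes second differences. Zero stiffness reproduces the data exactly; very large stiffness drives the second differences to zero, i.e., toward the best-fit line:

```python
# Discrete penalized smoother (a sketch of the stiffness idea, not the
# webinar's variable stiffness cubic spline UDFs).
import numpy as np

def smooth(y, stiffness):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
    A = np.eye(n) + stiffness * (D2.T @ D2)
    return np.linalg.solve(A, y)

y = np.array([0.0, 1.0, 0.5, 2.0, 1.5, 3.0])
print(np.allclose(smooth(y, 0.0), y))          # → True (passes through all points)
flat = smooth(y, 1e8)                          # nearly a straight line
print(np.allclose(np.diff(flat, n=2), 0, atol=1e-3))  # → True
```

Intermediate stiffness values trade fidelity to the data against curvature, which is the same dial the webinar's spline exposes.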

You can watch a recording of this webinar at Variable-Stiffness-Cubic-Splines.wmv. The model and library with the vscs functions will be posted here within a few weeks.

### Using Regression

**Date and Time:** Thursday, May 1, 2008 at 10:00 - 11:00 Pacific Daylight Time

**Date and Time:** Thursday, Aug 30, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Regression analysis is a statistical technique for curve fitting, discovering relationships in data, and testing hypotheses about relationships between variables. In this webinar, I will focus on generalized linear regression, which is provided by Analytica's Regression function, and examine many ways in which it can be used, including fitting simple lines to data, polynomial regression, use of other non-linear terms, and fitting of autoregressive models (e.g., ARMA). I'll examine how we can assess how likely it is that the data might have been generated from the particular form of the regression model used. We can also determine the level of uncertainty in our inferred parameter values, and incorporate these uncertainties into a model that uses the result of the regression. The talk will cover the Analytica 4.0 functions Regression, RegressionDist, RegressionFitProb, and RegressionNoise.
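The sense in which "generalized linear" regression is linear can be seen in a small numpy sketch (invented data; this is ordinary least squares, not Analytica's Regression function itself): the model is linear in its coefficients even though the basis terms are nonlinear in x:

```python
# Polynomial regression via least squares: fit y = c0 + c1*x + c2*x^2.
# The basis terms (powers of x) are nonlinear in x, but the fit is a
# linear problem in the coefficients c.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 2, 50)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0, 0.05, size=x.size)

basis = np.column_stack([np.ones_like(x), x, x**2])   # terms of the model
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
print(np.round(coef, 2))   # close to the true coefficients [1, 2, -0.5]
```

Swapping in other basis terms (logs, interactions, lagged values) gives the other model forms mentioned above without changing the fitting machinery.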

You can watch a recording of the 1 May 2008 webinar here: Regression.wmv (or on You Tube). The model developed during that webinar is here: Using Regression.ana

### Logistic Regression

**Date and Time:** Thursday, 5 June 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

*(Features covered in this webinar require Analytica Optimizer)*

Logistic regression is a technique for fitting a model to historical data to predict the probability of an event from a set of independent variables. In this talk, I'll introduce the concept of Logistic regression, explain how it differs from standard linear regression, and demonstrate how to fit a logistic regression model to data in Analytica. Probit regression is for all practical purposes the same idea as Logistic regression, differing only in the specific functional form for the model. Poisson regression is also similar except is appropriate when predicting a probability distribution over a dependent variable that represents integer "counts". All are examples of generalized linear models, and after reviewing these forms of logistic regression, it should be clear how other generalized linear model forms can be handled within Analytica.

This topic is appropriate for advanced modelers. I will assume familiarity with regression (see the earlier talk on the topic), but will not assume a previous knowledge of logistic regression.
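A from-scratch sketch of the idea (the webinar instead fits the model with Analytica Optimizer; the data below is invented): maximize the log-likelihood of a logit model by gradient ascent:

```python
# Logistic regression by plain gradient ascent on the log-likelihood.
import numpy as np

def fit_logistic(X, y, steps=5000, lr=0.1):
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))       # predicted P(event)
        w += lr * X1.T @ (y - p) / len(y)       # gradient of the log-likelihood
    return w

X = np.array([[0.1], [0.4], [0.5], [0.9], [1.2], [1.5]])
y = np.array([0, 0, 0, 1, 1, 1])                # event observed or not
w = fit_logistic(X, y)
prob = lambda x: 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))
print(prob(0.2) < 0.5 < prob(1.3))              # → True
```

Replacing the logistic link with the normal CDF gives probit regression, and an exponential link with a Poisson likelihood gives Poisson regression, which is the family resemblance among generalized linear models noted above.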

You can watch a recording of this webinar at: Logistic-Regression.wmv. The model developed during this webinar can be downloaded from Logistic_regression_example.ana. You'll also need the file BreastCancer.data.

## Bayesian Techniques

### Bayesian Posteriors using Importance Sampling

**Date and Time:** Thursday, September 4, 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Several algorithms for computing Bayesian posterior probabilities are special cases of *importance sampling*. The webinar of the previous week, Importance Sampling (rare events), introduced importance sampling, covering the theory behind it, how it is applied, and how Analytica's sample weighting feature can be used for importance sampling. This webinar continues with importance sampling, this time exploring how it can be used (at least in some cases) to compute Bayesian posterior probabilities.

I'll provide an introduction to what Bayesian posterior probabilities are, describe a couple of importance-sampling-based approaches to computing them, and implement a few examples in Analytica. Importance sampling techniques for computing posteriors have limited applicability -- in some cases they work well, in others not. I'll try to characterize what those conditions are.
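One such approach, likelihood weighting, fits in a few lines (a hypothetical coin-flip example, not the webinar's model): sample the parameter from its prior, weight each sample by the likelihood of the observed data, and read off weighted statistics:

```python
# Likelihood weighting in miniature: posterior over a coin's bias p after
# observing 7 heads in 10 flips, starting from a uniform prior.
import random

rng = random.Random(7)
heads, flips = 7, 10                         # observed data (invented)

samples, weights = [], []
for _ in range(20000):
    p = rng.random()                         # prior: p ~ Uniform(0, 1)
    w = p**heads * (1 - p)**(flips - heads)  # likelihood of the data given p
    samples.append(p)
    weights.append(w)

post_mean = sum(p * w for p, w in zip(samples, weights)) / sum(weights)
print(round(post_mean, 3))   # exact posterior mean is 8/12 ≈ 0.667
```

The weighted sample behaves like a draw from the posterior, which is why a general sample-weighting mechanism is enough to support this computation.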

You can watch a recording of this webinar at Posteriors_using_IS.wmv. About two-thirds through the presentation, we noticed a result that seemed to be coming out incorrectly. I explain what the problem was and fix it in Posteriors_using_IS_addendum.wmv. The models used during this presentation can be downloaded from Posterior sprinklers.ana and Likelihood weighting.ana.

### Importance Sampling (Rare events)

**Date and Time:** Thursday, 28 Aug 2008, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Importance sampling is a technique that simulates a target probability distribution of interest by sampling from a different sampling distribution and then re-weighting the sampled points so that computed statistics match those of the target distribution. The technique has applicability when the target distribution is difficult to sample from directly, but its probability density function is readily available. The technique produces valid results in the large sample size limit for any selection of sampling distribution (provided it is absolutely continuous with respect to the target distribution), but the best results (i.e., fastest convergence with smaller sample sizes) are obtained when a good sampling distribution is used. The technique is commonly used for rare-event sampling, where you want to ensure greater sampling coverage in the tails of distributions, where few samples would occur with standard Monte Carlo sampling. It also has applicability to the computation of Bayesian posteriors and the sampling of complex distributions.

In this talk we cover the theory behind importance sampling and introduce the sample weighting mechanism that is built into Analytica. We develop a rare-event model to demonstrate how the weighting mechanism is used to achieve the importance sampling. Next week we'll continue with an example of computing a Bayesian posterior probability.
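A minimal numerical sketch of the rare-event case (all numbers invented): estimate P(X > 4) for X ~ Normal(0, 1) by sampling from Normal(4, 1), where the event is common, and re-weighting each point by the density ratio target/proposal:

```python
# Rare-event importance sampling: shift the sampling distribution onto the
# rare region and correct with importance weights.
import math, random

rng = random.Random(3)
shift, threshold, n = 4.0, 4.0, 50000

def weight(x):
    """Density ratio phi(x) / phi(x - shift) for two unit-variance normals."""
    return math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)

total = 0.0
for _ in range(n):
    x = rng.gauss(shift, 1.0)   # proposal centered on the rare region
    if x > threshold:
        total += weight(x)      # indicator times importance weight
estimate = total / n

exact = 0.5 * math.erfc(threshold / math.sqrt(2))   # true tail probability
print(f"estimate {estimate:.2e}, exact {exact:.2e}")
```

Plain Monte Carlo with the same sample size would see the event only once or twice (P ≈ 3×10⁻⁵), while the re-weighted estimate is accurate to a few percent.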

A recording of this webinar can be viewed at Importance-Sampling.wmv. The model developed during this talk can be downloaded from: Importance Sampling rare events.ana.

## Presenting Models to Others

### The Analytica Cloud Player Style Library

**Date and Time:** Tuesday, 31 Jan 2012 10:00 am Pacific Standard Time

**Presenter:** Max Henrion or Fred Brunton (TBD), Lumina Decision Systems

**Abstract**

How to use the ACP Style Library and custom ACP-based web applications. Good practices for designing Analytica-model applications for the web.

A recording of this webinar can be viewed at ACP-Style-Library.wmv.

### Intro to Analytica Cloud Player

**Date and Time:** Thursday, 26 Jan 2012 10:00am Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This talk provides an introduction to the Analytica Cloud Player (ACP). We'll browse several example models on the web, demonstrating various capabilities and illustrating what a user of models needs to know. You'll see how to set up an ACP account, and we'll cover free usage of ACP with active support, the details of individual and group plans, session credits and pricing. Finally, you'll see how to publish (upload) models to the cloud. This talk will not cover how to tailor a model for the web with specific cloud-player style settings or the ACP style library -- those will be covered the following week.

A recording of this webinar can be viewed at Intro-ACP.wmv. You can also view the Power Point Slides from the talk (The power point slides were a very small part of the webinar).

### Guidelines for Model Transparency

**Date and Time:** 19 Feb 2009, 10:00am Pacific Standard Time

**Presenter:** Max Henrion, Lumina Decision Systems

**Abstract**

What makes Analytica models easy for others to use and understand? I will review some example models that illustrate ways to improve transparency -- or opacity. Feel free to send me your candidates ahead of time! We'll review some proposed guidelines. I hope to stimulate a discussion about what you think works well or not, and enlist your help in refining these guidelines.

You can watch a recording of this webinar at Transparency-Guidelines.wmv.

### Creating Control Panels

**Date and Time:** Thursday, May 29, 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

It is quite easy to put together "control panels" or "forms" for your Analytica models by creating input and output nodes for the inputs and outputs of interest to your model end users. This webinar will cover the basic steps involved in creating and arranging these forms, along with some tricks for making the process efficient. We'll cover the different types of input and output controls that are currently available, the use of text nodes to create visual groupings, use of images and icons, and the alignment commands that make the process very rapid. We'll learn how to change colors, and look at the use of buttons very briefly. This talk is appropriate for beginning Analytica users.

A recording of this webinar can be viewed at Control-Panels.wmv (requires Windows Media Player). The model used during this webinar is at Building Control Panels.ana.

### Sneak preview of Analytica Web Publisher

**Date and Time:** Thursday, February 21, 2008, 10:00 - 11:00 Pacific Standard Time

**Presenter:** Max Henrion, Lumina Decision Systems

**Abstract**

In this week's webinar, Max Henrion, Lumina's CEO, will provide a sneak preview of the Analytica Web Publisher. AWP offers a way to make Analytica models easily accessible to anyone with a web browser. Users can open a model, view diagrams and objects, change input variables, and view results as tables and graphs. Users will also be able to save changed models, to revisit them in later sessions. Model builders can upload models into AWP directly from their desktop. Usually, AWP directories are password protected, so only authorized users can view and use models. But, we also plan to make a free AWP directory available for people who want to share their models openly.

AWP is nearing release for alpha testing. We welcome your comments and would like to hear how you might envisage using AWP.

*This webinar was not recorded.*

## Application Integration Topics

### OLE Linking

**Time and Date:** Thursday, 27 Mar 2008 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract:**

OLE linking is a commonly used method for linking data from Excel spreadsheets into Analytica and results from Analytica into Excel spreadsheets. It can be used with other applications that support OLE linking as well. The basic usage of OLE linking is pretty simple -- it is a lot like copy and paste. This webinar covers the basics of OLE linking for fixed-size 1-D or 2-D tables. I also demonstrate the basic tricks you must go through to link index values and multi-D inputs and outputs. In addition, we discuss what some of those OLE-link settings actually do, and explain how OLE-connected applications connect to their data sources.

A recording of this webinar can be viewed at 2008-03-27-OLE-Linking.wmv.

Note: Another 10 minute fast-paced video (separate from the webinar) demonstrates linking data from Analytica into Excel, computing something from that data, and linking the result back into Analytica: OLE-to-Excel-and-back.wmv.

### Querying an OLAP server

**Date and Time:** Thursday, February 14, 2008, 10:00 - 11:00 Pacific Standard Time

(*Note: Schedule change from an earlier posting. This is now back to the usual Thursday time. *)

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

In this session, I'll show how the MdxQuery function can be used to extract multi-dimensional arrays from an On-Line Analytical Processing (OLAP) server. In particular, during this talk we'll query Microsoft Analysis Services using MDX. In this talk, I'll introduce some basics regarding OLAP and Analysis Services, discuss the differences between multi-dimensional arrays in OLAP and Analytica, cover the basics of the MDX query language, show how to form a connection string for MdxQuery, and import data. I'll also show how hierarchical dimensions can be handled once you get your data into Analytica.

*Note: Use of the features demonstrated in this webinar requires the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.*

The model created during this webinar is available here: Using MdxQuery.ana. You can watch a recording of this webinar here: MdxQuery.wmv (requires Microsoft Media Player)

### Querying an ODBC relational database

**Date and Time:** Thursday, February 7, 2008, 10:00 - 11:00 Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

In this talk I'll review the basics of querying an external relational ODBC database using DbQuery. This provides a flexible way to bring in data from SQL Server, Access, Oracle, and mySQL databases, and can also be used to read CSV-text databases and even Excel. In this talk, I will cover the topics of how to configure and specify the data source, the rudimentary basics of using SQL, the use of Analytica's DbQuery, DbWrite, DbLabels and DbTable functions.

*Note: Use of the features demonstrated in this webinar requires the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.*

You can grab the model created during this webinar from here: Querying an ODBC relational database.ana. A recording of this webinar can be viewed at Using-ODBC-Queries.wmv.

### Calling External Applications

**Date and Time:** Thursday, Oct 18, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The RunConsoleProcess function runs an external program, can exchange data with that program, and can be used to perform a computation or acquire data outside of Analytica that can then be used within the model. I'll demonstrate how this can be used with a handful of programs, and with code written in several programming and scripting languages. I'll demonstrate a user-defined function that retrieves historical stock data from a web site.

You can watch a recording of this webinar at: Calling-External-Applications.wmv (Requires Windows Media Player)

Files created or used during this webinar can be downloaded:

- Regular Expression Matching.ana
- RegExp.vbs
- Read Historical Stock Data.ana
- For plotting to gnuplot, these gnuplot command files were used. (Note: You may have to adjust some file paths within these files, and within the model): Gnuplot-candlesticks.dat, Gnuplot-3dsurface.dat
- ReadURL.exe (for C++/CLR source code, see Retrieving Content From the Web)

The example of retrieving stock data from Yahoo Finance is also detailed in an article here: Retrieving Content From the Web

### New Functions for Reading Directly from an Excel File

**Date and Time:** Thursday, 24 April 2008 10:00 Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

**(Feature covered requires Analytica Enterprise or better)**

Hidden within the new release of Analytica 4.1 are three new functions for reading values directly from Excel spreadsheets: OpenExcelFile, WorksheetCell, and WorksheetRange. These provide an alternative to OLE linking and ODBC for reading data from spreadsheets, which may be more convenient, flexible and reliable in many situations. We have not yet exposed these functions on the Definitions menu or in the User Guide in release 4.1, since they are still in an experimental stage. I would like to know that they have been "beta-tested" in a variety of scenarios before we fully expose them (also, the symmetric functions for writing don't exist yet). In this webinar, I will introduce and demonstrate these functions, after which you can start using them with your own problems.

The model created during this talk is here: Image:Functions for Reading Excel Worksheets.ana. It reads from the example workbook that comes with Office 2003, to which we added a few range names during the talk, resulting in SolvSamp.xls. Place the Excel file in the same directory as the model. A recording of this webinar can be viewed at Reading-From-Excel.wmv.

### Reading Data from URLs to a Model

**Date and Time:** Thursday, 27 Aug 2009, 10:00am-11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

*Requires Analytica Enterprise*

The new built-in function, ReadFromUrl, can be used to read data (and images) from websites, such as HTTP web pages, FTP pages, or even web services like SOAP. In this webinar, I'll demonstrate the use of this function in several ways, including reading live stock and stock option price data, posting data to a web form, retrieving a text file from an FTP site, supplying user and password credentials for a web site or ftp service, downloading and displaying images including customized map and terrain images, and querying a SOAP web service.
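The kinds of requests described above, a plain page fetch or posting data to a web form, look like this in Python's `urllib` (a Python analog for illustration; `example.com` is a placeholder URL, and this is not ReadFromUrl's syntax):

```python
import urllib.parse
import urllib.request

# Build (but do not send) a POST request carrying form data, the way one
# would post values to a web form. (Python analog for illustration; the
# URL is a placeholder, and this is not ReadFromUrl's syntax.)
form = urllib.parse.urlencode({"symbol": "IBM", "range": "1y"}).encode()
req = urllib.request.Request("http://example.com/quote", data=form, method="POST")
# urllib.request.urlopen(req) would perform the request and return the response body.
```
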

You can watch a recording of this webinar at ReadFromUrl.wmv. The model with the examples shown during the webinar is at Reading_Data_From_the_Web.ana.

### Using the Analytica Decision Engine (ADE) from ASP.NET

**Date and Time:** Thursday, April 10, 2008 10:00am Pacific Daylight Time

**Presenter:** Fred Brunton, Lumina Decision Systems

The Analytica Decision Engine (ADE) allows you to utilize a model developed in Analytica as a computational back-end engine from a custom application. In this webinar, we'll create a simple active web server application using ASP.NET that sends inputs submitted by a user to ADE, and displays results computed by ADE on a custom web page. In doing this, you will get a flavor of how ADE works and how you program with it. If you've never created an active server page, you may enjoy seeing how that is done as well. This introductory session is oriented more towards people who do not have experience using ADE, so that you can learn a bit more about what ADE is and where it is appropriate by way of example.

You can watch a recording of this webinar at ASP-from-ASPNET.wmv. To download the program files that were created during this webinar Click here.

## Optimization

### Introduction to Structured Optimization

**Date and Time:** Thursday, February 24, 2011 at 10:00am PST (1:00pm EST, 6:00pm GMT)

**Presenter**: Paul Sanford, Lumina Decision Systems

**Abstract**

Analytica 4.3 is now available for beta testing and will be released in early March. The new version includes expanded optimization capabilities and simplified workflow for encoding optimization problems. The new **Structured Optimization** framework in 4.3 is centered around a new function, DefineOptimization(), which replaces all three of the previous type-specific functions: LPDefine(), QPDefine() and NLPDefine(). It also introduces a new node type, **Constraint**, which allows you to specify constraints using ordinary expressions. Paul will build up some basic examples using Structured Optimization and field questions from users.

A recording of this webinar can be viewed at: Structured-Optimization.wmv. The example models used during this webinar are: Beer Distribution LP1.ana, Beer Distribution LP2.ana, File:Plane Allocation LP.ana, File:Polynomial NLP.ana

### Interactive Optimization Workshop

**Date and Time:** Thursday, 24 March 2011, 10:00am Pacific Daylight Time

**Presenter:** Paul Sanford, Lumina Decision Systems

**Abstract**

This is an interactive workshop where you will learn the basics of creating Structured Optimization models and challenge yourself to set up and solve some basic examples on your own! No prior training in optimization is required. Trial Downloads of Analytica Optimizer are now available. Attendees are encouraged to have Analytica Optimizer 4.3 installed and running during the workshop.

You can watch a recording of this webinar at: Optimization Workshop.wmv. You can download the models from the talk: Optimal Box.ana and call_center.ana.

### Optimizing Parameters in a Complex Model to Match Historical Data

**Date and Time:** Thursday, 31 March 2011, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Ph.D., Lumina Decision Systems

**Abstract**

Almost all quantitative models have parameters that must be assessed by experts or estimated from historical data. Estimation from historical data can be complicated by the presence of variables that are either unobservable or unavailable in the historical record. Maximum likelihood estimation addresses this by finding the parameter settings that maximize the likelihood of the historical data predicted by the model. In this talk, I will formulate the parameter fitting task as a structured optimization problem (NLP), providing a hands-on demonstration of the new structured optimization features in Analytica 4.3.
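As a flavor of the underlying idea, here is a minimal Python sketch (not the webinar's Analytica model) that fits one parameter, the rate of an exponential distribution, by minimizing the negative log-likelihood of observed data:

```python
import math

def neg_log_lik(lam, data):
    """Negative log-likelihood of an exponential(lam) model for the
    observed data -- the objective a parameter-fitting NLP minimizes.
    (Illustrative Python sketch; the webinar formulates this in Analytica.)"""
    return -sum(math.log(lam) - lam * x for x in data)

def fit_rate(data, lo=1e-6, hi=100.0, iters=200):
    """Ternary search for the rate minimizing neg_log_lik
    (valid here because the objective is convex in lam)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if neg_log_lik(m1, data) < neg_log_lik(m2, data):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

For exponential data the maximum-likelihood rate is 1/mean, which the search recovers numerically.
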

A webinar recording of this can be viewed at Parameter-Optimization.wmv. The model file developed during the webinar is Parameter_Optimization.ana. The webinar also mentioned the Arbitrage Theorem.

### Optimization with Uncertainty

**Date and Time:** Thursday, 14 April 2011, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Ph.D., Lumina Decision Systems

**Abstract**

Analytica analyzes uncertainty by conducting a Monte Carlo analysis. When you optimize decision variables in a model containing uncertainty, you have a choice: you can perform one optimization over the Monte Carlo sample, or you can perform a Monte Carlo sampling of optimizations (i.e., the Monte Carlo is inside the optimization, or the optimization is inside the Monte Carlo). The first case applies when the decision must be taken while the quantities are still uncertain; the second when the values of the uncertain quantities will be resolved before the decisions are taken.

To illustrate, consider the situation faced by a relief organization that provides aid to victims of natural disasters. In one situation, a decision must be made regarding how to allocate resources among several currently occurring famines. At the time the decision must be made, the actual intensity, progress and aid effectiveness for each famine is uncertain. In a different situation, the organization wants to characterize the uncertainty in its need for resources for the upcoming year, perhaps forecasting the damage from next year's famines, and using these forecasts in its budgeting and planning decisions.
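The two orderings can be sketched with a toy newsvendor-style decision in Python (an illustrative analog, not the famine model from the webinar):

```python
import random

random.seed(1)
demand_scenarios = [random.uniform(50, 150) for _ in range(1000)]  # uncertain quantity

def payoff(stock, demand):
    # newsvendor-style payoff: sell what you can at 5/unit, each stocked unit costs 2
    return 5 * min(stock, demand) - 2 * stock

stocks = range(0, 201, 5)  # candidate decisions

# Case 1: decide BEFORE the uncertainty resolves -- one optimization over
# the whole Monte Carlo sample (maximize total, i.e. expected, payoff).
best_now = max(stocks, key=lambda s: sum(payoff(s, d) for d in demand_scenarios))

# Case 2: decide AFTER the uncertainty resolves -- an optimization inside
# each Monte Carlo scenario, yielding a distribution of optimal decisions.
best_per_scenario = [max(stocks, key=lambda s: payoff(s, d))
                     for d in demand_scenarios[:100]]
```

Case 1 yields a single hedged decision; Case 2 yields one optimal decision per scenario, whose spread characterizes the uncertainty in resource needs.
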

You can watch a recording of this webinar at Optimization-w-Uncertainty.wmv. The example model developed during the webinar can be downloaded from Famine Relief. You can also download the PowerPoint slides from the talk.

### Neural Networks

**Date and Time:** Thursday, 21 April 2011, 10:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Ph.D.

**Abstract**

A feed-forward artificial neural network is a non-linear function that predicts one or more outputs from a set of inputs. These are usually used in two layers: the inputs are weighted and summed, then passed through a sigmoid function to determine the activations of a hidden layer; those activations are in turn weighted, summed, and passed through a sigmoid function to predict the final output. A training phase adjusts the weights to "fit" an example data set.
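The forward pass just described can be sketched in a few lines of Python (an illustration of the structure, not the Analytica model built in the webinar; the weight layout is an assumption):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    """Two-layer feed-forward pass: weighted sums through a sigmoid
    hidden layer, then a sigmoid output. w_hidden is one weight list
    per hidden unit; w_out is one weight per hidden unit.
    (Illustrative Python sketch; the weight layout is an assumption.)"""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
```

Training then amounts to optimizing the entries of `w_hidden` and `w_out` to minimize prediction error on the example data, which is exactly the role structured optimization plays in the webinar.
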

In this webinar, I'll create a neural network model in Analytica and train it on example data as a demonstration of the use of structured optimization. It provides a simple and easily understood example of the use of intrinsic indexes in a structured optimization model, while at the same time introducing the basics of the interesting topic of neural networks.

You can watch a recording of this webinar at Neural-Networks.wmv. The neural network model created during the webinar (requires Analytica Optimizer) is Neural-Network.ana.

### Introduction to Linear and Quadratic Programming

**Date and Time:** Thursday, Oct 11, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This talk is an introduction to linear programming and quadratic programming, and an introduction to solving LPs and QPs from inside an Analytica model (via Analytica Optimizer). LPs and QPs can be efficiently encoded using the Analytica Optimizer functions LpDefine and QpDefine. I'll introduce what a linear program is for the sake of those who are not already familiar, and examine some example problems that fit into this formalism. We'll encode a few in Analytica and compute optimal solutions. Although LPs and QPs are special cases of non-linear programs (NLPs), they are much more efficient and reliable to solve, avoid many of the complications present in non-linear optimization, and fully array abstract. Many problems that initially appear to be non-linear can often be reformulated as an LP or QP. We'll also see how to compute secondary solutions such as dual values (slack variables and reduced prices) and coefficient sensitivities. Finally, LpFindIIS can be useful for debugging an LP to isolate why there are no feasible solutions.
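As a taste of why LPs are so tractable, here is a teaching sketch in Python that solves a tiny two-variable LP by enumerating vertices of the feasible region, exploiting the fact that an LP optimum always lies at a vertex (LpDefine, of course, delegates to a real solver):

```python
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
# Each row (a, b, c) encodes the constraint a*x + b*y <= c.
A = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def vertices(rows):
    """Yield feasible intersection points of constraint-boundary pairs --
    the candidate vertices of the feasible polygon. (Teaching sketch only.)"""
    for (a1, b1, c1), (a2, b2, c2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if det:  # boundaries intersect in a single point
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            if all(a * x + b * y <= c + 1e-9 for a, b, c in rows):
                yield (x, y)

best = max(vertices(A), key=lambda p: 3 * p[0] + 2 * p[1])  # optimum at (4, 0)
```
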

You can watch a recording of this webinar here: LP-QP-Optimization.wmv (requires Windows Media Player)

The model file created during this webinar is here: LP QP Optimization.ana

### Non-Linear Optimization

**Date and Time:** Thursday, Oct 4, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This talk focuses on the problem of maximizing or minimizing an objective criterion in the presence of constraints. This problem is referred to as a non-linear program, and the capability to solve problems of this form is provided by the Analytica Optimizer via the NlpDefine function. In this talk, I'll introduce the use of NlpDefine for those who have not previously used this function, and demonstrate how NLPs are structured within Analytica models. I'll examine various challenges inherent in non-linear optimization, along with tricks for diagnosing them and some ways to address them. We'll also examine various ways to structure models for parametric analyses (e.g., array abstraction over optimization problems), and optimizations in the presence of uncertainty.
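One standard NLP technique, minimizing a penalized objective by gradient descent, can be sketched in Python (a teaching illustration only; NlpDefine delegates to a real solver):

```python
def grad(x, y, mu=100.0):
    """Gradient of the penalized objective for:
        minimize (x - 3)^2 + (y - 2)^2  subject to  x + y <= 4,
    with a quadratic penalty mu * max(0, x + y - 4)^2 on violation.
    (Teaching sketch of one NLP technique, not NlpDefine's algorithm.)"""
    g = max(0.0, x + y - 4)
    return 2 * (x - 3) + 2 * mu * g, 2 * (y - 2) + 2 * mu * g

x = y = 0.0
for _ in range(5000):       # plain gradient descent with a fixed step size
    dx, dy = grad(x, y)
    x -= 0.002 * dx
    y -= 0.002 * dy
# (x, y) now approximates the constrained optimum near (2.5, 1.5):
# the unconstrained minimum (3, 2) violates x + y <= 4, so the solution
# is pushed onto the constraint boundary.
```
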

You can watch a recording of this session here: Nonlinear-Optimization.wmv

During the talk, these two models were created:

## Vertical Applications and Case Studies

### Regional Weather Data Analysis

**Date and Time:** Thursday, 22 April 2010 10:00am Pacific Daylight Time

**Presenter:** Brian Parsonnet, Ice Energy

**Abstract**

There are numerous sources of weather data on the web. Users of this data face a few common problems: how to gather the data in volume, how to normalize the data regardless of source, and how to analyze the results to generate insight. Analytica is the perfect tool to address all three issues simply and efficiently. A sample model will be shown illustrating some of these techniques.

A recording of this webinar can be viewed at Regional-Weather-Analysis.wmv. The model shown during the talk is Weather analysis.ana, and the data file used by this model for Burbank weather can be downloaded from Burbank.zip (remember to Unzip it first to Burbank.txt). To avoid issues with ownership of the data, the temperatures in this file have been randomized (so the data is not accurate) and other fields zeroed out, but this will still allow you to play with the model and data.

### Automated Monitoring and Failure Detection

**Date and Time:** 5 Feb 2009, 10:00am Pacific Standard Time

**Presenter:** Brian Parsonnet, ICE Energy

**Abstract**

In many complex physical systems, the automatic and proactive detection of system failures can be highly beneficial. Often dozens of sensor readings are collected over time, and a computer analyzes these to detect when system behavior is deviating from normal. Sounding an alert can then facilitate early intervention, perhaps catching a component that is just starting to go bad.

In a complex physical system with multiple operating modes and placed in a changing environment, anomaly detection is a very difficult problem. Simple sensor thresholds (and other related approaches) lack context-dependence, often making these simple approaches insufficient for the task. What is normal for any given sensor depends on the system's operating mode, time of day, activities in progress, and environmental factors. Simple thresholds that don't take such context into account either end up being so loose that they miss legitimate anomalies, or so tight that too many excess alarms are generated during normal conditions.
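A minimal sketch of the context-dependent idea in Python (illustrative only; the expert system in the webinar uses far richer rules) computes an acceptable range per operating context from that context's own history:

```python
from statistics import mean, stdev

def context_thresholds(readings, k=3.0):
    """Compute an acceptable range per operating context (mode, hour,
    season, ...) as mean +/- k standard deviations of historical
    readings in that context. `readings` maps a context key to its
    historical sensor values. (Minimal sketch; the webinar's expert
    system expresses far richer, multi-factor rules.)"""
    return {ctx: (mean(vals) - k * stdev(vals), mean(vals) + k * stdev(vals))
            for ctx, vals in readings.items()}

# Example: a reading of 10 is normal while idle, but anomalous while cooling.
limits = context_thresholds({"idle": [10, 11, 9, 10], "cooling": [50, 52, 48, 50]})
```
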

In this webinar, I'll show an expert system I've developed in Analytica that detects anomalies and developing failures in our deployed cooling system products. Data from dozens of sensors is collected at 5-minute intervals while the system transitions through multiple operating modes, daily and seasonal environmental fluctuations, and changing system demands. The Analytica model provides a framework in which complex rules that take multiple factors into account can be expressed, and used to estimate acceptable upper and lower operating ranges that are dynamically adjusted at each moment in time, taking into account whatever context is available. The Analytica environment provides a very readable and understandable language for expressing monitoring rules, and its overall transparency enables us to spot where other rules are needed and what they need to be.

A recording of this webinar can be watched at Failure-Detection.wmv.

### Data Center Capacity Planning

*Please note that this presentation will be on Wednesday rather than Thursday this week.*

**Date and Time:** Wednesday, October 21, 2008 10:00am Pacific Daylight Time

**Presenter:** Max Henrion, Lumina Decision Systems

**Abstract**

Data center energy demands are on the rise, creating serious financial as well as infrastructural challenges for data center operators. In 2006, data centers were responsible for a costly 1.5 percent of total U.S. electricity consumption, and national energy consumption by data centers is expected to nearly double by 2011. For data center operators, this means that many data centers are reaching the limits of power capacity for which they were originally designed. In fact, Gartner predicts that 50 percent of data centers will discover they have insufficient power and cooling capacity in 2008.

This week's presentation will provide an overview of ADCAPT -- the Analytica Data Center Capacity Planning Tool. For this webinar, the User Group will be joining a presentation that is also being given outside of the Analytica User Group, but which I (Lonnie) think is also of interest to the User Group community, in that it shows an example of a re-usable Analytica model, containing several very interesting and novel techniques, applied to a very interesting application area.

Due to technical difficulties, this webinar was not recorded.

### Modeling the Precision Strike Process

**Date and Time:** Thursday, October 16, 2008, 10:00am Pacific Daylight Time

**Presenter:** Henry Neimeier, MITRE

**Abstract**

We describe a new paradigm for modeling, and apply it to a simple view of the precision strike attack process against mobile targets. The new modeling paradigm employs analytic approximation techniques that allow rapid model development and execution. These also provide a simple dynamic analytic risk evaluation capability for the first time. The beta distribution is used to summarize a broad range of target dwell and execution time scenarios in compact form. The data processing and command and control processes are modeled as analytic queues.
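As a small illustration of how a beta distribution can compactly summarize a range of scenarios, here is a Python sketch that fits Beta(alpha, beta) on [0, 1] by matching a mean and variance (a standard method-of-moments fit, shown for illustration and not necessarily the paper's exact procedure):

```python
def beta_from_moments(m, v):
    """Fit Beta(alpha, beta) on [0, 1] matching mean m and variance v
    by the method of moments -- one way a beta distribution can
    summarize a broad range of dwell/execution-time scenarios in
    compact form. (Illustrative sketch only.)"""
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common
```

For example, a symmetric hump with mean 0.5 and variance 0.05 is recovered as Beta(2, 2).
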

You can watch a recording of this webinar at: Precision-Strike-Process.wmv. Several related papers and materials are also available, including:

- A New Paradigm For Modeling The Precision Strike Process (U) by H. Neimeier from MILCOM96
- Milcom96.ana -- the model from the talk and above paper.
- Analytic Uncertainty Modeling Versus Discrete Event Simulation by H. Neimeier, PHALANX March 1996.
- Analytica Queuing Networks by H. Neimeier, Proc. 12th Int'l Conf. Systems Dynamics Soc. 1994.
- The Architecture of CAPE Models by K.P. Kuskey and S.K. Parker, MITRE Tech. Report.
- Analytical Modeling in Support of C4ISR Mission Assessment (CMA) by F.R. Richards, H.A. Neimeier, W.L. Hamm, and D.L. Alexander, 3rd Int'l Symp. on Command and Control Research and Technology, 1997.
- Analyzing Processes with HANQ by H. Neimeier and C. McGowan, MITRE, from INCOSE96.
- Functions for drawing *radar plots*: Radarplt.ana
- PowerPoint slides: Cape.ppt, PGMrisk.ppt, and JDEMweb.ppt

### Modeling Utility Tariffs in Analytica

**Presenter:** Brian Parsonnet, Ice Energy

**Date and Time:** Thursday, Nov 8, 2007 at 10:00 - 11:00am Pacific Standard Time

Modeling utility tariffs is a tedious and complicated task. There is no standard approach to how a utility tariff is constructed, and there are thousands of tariffs in the U.S. alone. Ice Energy has made numerous passes at finding a “simple” approach to enable tariff vs. product analysis, including writing VB applications, building involved Excel spreadsheets, using 3rd-party tools, and outsourcing projects to consultants. The difficulty stems from the fact that there is little common structure to tariffs, and efforts to standardize on what structure does exist are confounded by an endless list of exceptions. But using the relatively simple features of Analytica, we have created a truly generic model that allows a tariff to be defined and integrated in just a few minutes. The technique is not fancy by Analytica standards, so this in essence demonstrates how Analytica’s novel modeling concept can tackle tough problems.

You can watch a recording of this webinar at: 2007-11-08-Tariff-Modeling (Requires Windows Media Player)

### Modeling Energy Efficiency in Large Data Centers

**Date and Time:** Thursday, Oct 25, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Surya Swamy, Lumina Decision Systems

**Abstract**

The U.S. data center industry is witnessing tremendous growth, stimulated by increasing demand for data processing and storage. This has a number of important implications, including increased energy costs for business and government, increased emissions from electricity generation, increased strain on the power grid, and rising capital costs for data center capacity expansion. In this webinar, I'll illustrate how Analytica's dynamic modeling capabilities, coupled with its advanced uncertainty capabilities, offer tremendous support in building cost models for the planning and development of energy-efficient data centers. The model enables users to explore future technologies whose performance, costs and efficiencies are uncertain and hence must be probabilistically evaluated over time.

You can watch a recording of this presentation at: Data-Center-Model.wmv (Requires Windows Media Player)

## Graphing

### Creating Scatter Plots

**Date and Time:** Thursday, May 15, 2008 at 10:00 - 11:00am Pacific Daylight Time

**Date and Time:** Thursday, Aug 23, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

This webinar focuses on utilizing graphing functionality new to Analytica 4.0, and specifically, functionality enabling the creative use of scatter plots. The talk will focus primarily on techniques for simultaneously displaying many quantities on a single 2-D graph. I'll discuss several methods in which multiple data sources (i.e., variable results) can be brought together for display in a single graph, including the use of result comparison, comparison indexes, and external variables. I'll describe the basic new graphing-role / filler-dimension structure for advanced graphing in Analytica 4.0, enabling multiple dimensions to be displayed on the horizontal and vertical axes, or as symbol shape, color, or symbol size, and how all these can be rapidly pivoted to quickly explore the underlying data. I'll discuss how graph settings adapt to changes in pivot or result view (such as Mean, Pdf, Sample views).

A recording of this webinar can be viewed at Scatter-Plots.wmv.

Model used: During this webinar, I started with some example data in the model Chemical elements.ana. The original file is in the form before graph settings were changed. By the end of the webinar, many graph settings had been altered, and various changes made, resulting in Scatter-Plots.ana (during the Aug 23 presentation, this was the final model: Chemical elements2.ana).

### Graph Style Templates

**Date and Time:** Thursday, February 28, 2008, 10:00 - 11:00 Pacific Standard Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

Graph style templates provide a convenient and versatile way to bundle graph setup options so that they can be reused when viewing other result graphs. For example, if you've discovered a set of colors and fonts and a layout that creates the perfect pizzazz for your results, you can bundle that into a template where you can quickly select it for any graph. In this talk, I'll introduce how templates can be used and how you can create and re-use your own. I'll show the basics of using existing templates, previewing what templates will look like, and applying a given template to a single result or to your entire model. We'll also see how to create your own templates, and in the process I'll discuss what settings can be controlled from within a template. I'll discuss how graph setup options are a combination of global settings, template settings, and graph-specific overrides. I'll show how to place templates into libraries (thus allowing you to have template libraries that can be readily re-used in different models), and even show how to control a few settings using templates that aren't selectable from the Graph Setup UI. I'll also touch on how different graph settings are associated with different aspects of a graph, ultimately determining how the graph adapts to changes in uncertainty view or pivots.

The model created during this webinar is here: Graph style templates.ana. You can watch a recording of the webinar here: Graph-Style-Templates.wmv.

## Scripting

### Button Scripting

**Date and Time:** Thursday, Sept. 6, 2007 at 10:00 - 11:00am Pacific Daylight Time

**Presenter:** Max Henrion, Lumina Decision Systems

**Abstract**

This webinar is an introduction to Analytica's typescript and button scripting. Unlike variable definitions, button scripts can have side-effects, and this can be useful in many circumstances. I'll cover the syntax of typescript (and button scripts), and how scripts can be used from buttons, picture nodes or choice inputs. I'll introduce some of the Analytica scripting language to those who may not have seen or used it before. And we'll examine some ways in which button scripting can be used.

You can watch the recording of this webinar here: Button Scripting.wmv (Requires Windows Media Player or equiv). The model files and libraries used during the webinar are in Ana_tech_webinar_on_scripting.zip.

## Analytica User Community

### The Analytica Wiki, and How to Contribute

**Date and Time:** (tentative) Thursday, October 30, 2008, Pacific Daylight Time

**Presenter:** Lonnie Chrisman, Lumina Decision Systems

**Abstract**

The Analytica Wiki is a central repository of resources for active Analytica users. What's more, you -- as an active Analytica user -- can contribute to it. As an Analytica community, we have a lot to learn from each other, and the Analytica Wiki provides one very nice forum for doing so. You can contribute example models and libraries, hints and tricks, and descriptions of new techniques. You can fix errors in the Wiki documentation if you spot them, or add to the information that is there when you find subtleties that are not fully described. If you spend a lot of time debugging a problem, after solving it you could document the issue and how it was solved for your own benefit in the future, as well as for others in the user community who may encounter the same problem. When you publish a relevant paper, I hope you will add it to the page listing publications that utilize Analytica models.

I will provide a quick tour of the Analytica Wiki as it exists today. I'll then provide a tutorial on contributing to the Wiki -- e.g., the basics of how to edit or add content. Wikipedia has had tremendous success with this community content contribution model, and I hope that after this introduction many of you will feel more comfortable contributing to the Wiki as you make use of it.

Due to a problem with the audio on the recording, the recording of this webinar is not available.

## Licensing or Installation

### Reprise License Manager Tutorial

**Date and Time:** Wednesday, 11 March 2010, 10:00am Pacific Standard Time

**Presenter:** Bob Mearns, Reprise Software Inc.

**Abstract**

The Reprise License Manager (RLM) allows all Analytica and ADE licenses within an organization to be managed from a central server. RLM can be used with either *floating* or *named-user* licenses.

This tutorial on RLM administration is being given by Bob Mearns, lead software developer at Reprise Software, Inc., who has over 15 years' experience developing and supporting software license managers. This session will focus on:

- Basic RLM Server Setup
- How and where RLM looks for licenses
- Using the RLM Web Server Admin Interface
- Using RLM diagnostics, new in RLM v8
- A systematic approach to diagnosing license server connectivity problems

This talk focuses heavily on how to debug problems with the RLM license manager, covering many of the technical details of the RLM setup along the way. It is most relevant for IT managers who administer the license server, and for those installing the RLM server who would like a more thorough understanding of how things work.

This talk is being provided by Reprise Software.

This webinar may be viewed here: RLM-troubleshooting.wmv. The trouble-shooting tips document covered in the talk is at RLM Troubleshooting Tips.

## See also

- Archived webinars
- Analytica User Group
- Configuring an RLM Server -- step-by-step for installing the RLM server
- How to Install Analytica -- Centrally Managed License -- for the Analytica user's side installation
