Working With the New Data Model

As of PMF 5.3.2, the model in which each individual measure had to be loaded as a separate, schedulable entity no longer applies. Instead, PMF allows you to group your loadable data by source and to schedule loads for each source group.

You can specify how that data should be harvested from individual physical data sources.

You can also specify that some datapoints are acquired from end users. Typically, you do this if you do not already have a physical source for this data, but want PMF to become the system of record for the data and to track it as it is captured.

PMF provides user-input features that allow it to capture, update, validate, and store the data in datapoints for use downstream.

When setting up user-entered sources, specify the level of dimensionality common to all datapoints in a single source group. At capture time, end users can then input all of those datapoints at once, which increases speed and convenience.
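As a rough illustration of this idea, the sketch below models a user-input form for a source group whose datapoints share one dimensional level. All names and the validation logic are assumptions for illustration only; they are not PMF's API or storage format.

```python
# Hypothetical sketch (not PMF's API): a user-entered source group whose
# datapoints all share one level of dimensionality, so a single input form
# can capture every datapoint for a dimensional level at once.

def capture_form(dimensionality, datapoint_names, entries):
    """Validate one user-input submission for a source group.

    entries maps a dimension tuple (matching `dimensionality`) to a dict
    of {datapoint_name: value} entered on the form.
    """
    captured = {}
    for level, values in entries.items():
        if len(level) != len(dimensionality):
            raise ValueError(f"Level {level} does not match {dimensionality}")
        missing = [n for n in datapoint_names if n not in values]
        if missing:
            raise ValueError(f"Level {level} is missing values for: {missing}")
        captured[level] = {n: float(values[n]) for n in datapoint_names}
    return captured

# One submission captures both datapoints for the (Region, Month) level.
form = capture_form(
    dimensionality=("Region", "Month"),
    datapoint_names=["Headcount", "Training Hours"],
    entries={("East", "2024-01"): {"Headcount": 42, "Training Hours": 310}},
)
```

Because every datapoint in the group shares the same dimensionality, one submission validates and stores all values for a level together rather than one datapoint at a time.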

Once data has been harvested from physical sources or data has been entered by the user, PMF can regularly recalculate any derived datapoints. Recalculation is performed in lineage order. This means that PMF itself determines which derived datapoints have dependencies, and waits to perform recalculation on any datapoint until all of its precursor dependencies have been resolved.

After all datapoints are reloaded, and derived datapoints are fully recalculated, PMF checks for measure dependencies of these datapoints. It then copies the data, as you designed it, to the measures.


The Core Paradigm

The new metrics model in PMF allows you to think about measure loads differently than in previous versions of PMF.


Migrating to the New Architecture

PMF allows you to automatically migrate legacy measure load scripts that were created prior to PMF Release 5.3.2 to the new architecture.

How the Migrator Works

If you used PMF prior to release 5.3.2, you used legacy measures and set up loads directly on them. There were no sources or datapoints. The Automated Migration feature migrates all of these old measure components to the new architecture.

Note: You can rename the created datapoints at any time after migration is completed.

Should I Upgrade Legacy Measures?

Before upgrading legacy measures, take the following into consideration:

Procedure: How to Perform a Migration

Before performing a migration, note the following:

To perform a migration to the new architecture:

  1. From the Manage tab, click the Data Mart subtab.
  2. Click Migrate Legacy Measures. The Migrate Legacy Measures to New panel opens, as shown in the following image.

    Migrate Legacy Measures to New panel

    The following options are available:

    Retain the Legacy Load Mode

Enable this option to transfer the deletion option used by each migrated measure into the equivalent default Wipe setting on the source to be created. This setting is enabled by default.

    Retain Alternate Targets

    If you are using Alternate Targets (for example, Benchmark, Stretch, Budget, and Forecast targets), enable this option to make sure that they are migrated.

    If you are not using Alternate Targets, it is recommended that you leave this option disabled. This is the default setting.

    Delete Measures that do not migrate

If enabled, the migrator deletes any measures that do not migrate. It is recommended that you select this option only after performing a test migration and determining that it is safe to do so. This option is disabled by default.

    Note: Generally, legacy measures that are incomplete will not be migrated, since PMF does not have any way to map these into the new architecture. You can always choose not to delete these, manually set them up as new measures, and then delete the unconverted legacy measures.

    Delete any existing Sources and dependent objects

    If enabled, the migrator will delete any existing sources and datapoints that were created using the new architecture. This will delete any new sources, datapoints, and measures in your system.

If this option is disabled and your system contains new measures, existing sources, or existing datapoints, the migrator produces an error to prevent overwriting any new components. This option is disabled by default.

    Base Loadable Datapoint names

    The following options are available:

    • on legacy Measure names. This option names loadable datapoints created in each source using the names of the measures to which the migrator links them.
    • on new Loadable Source names. This option names loadable datapoints created in each source using the source name, so that the names contain the Master File information.
  3. Click Migrate.

    PMF performs the migration. This operation can take a few minutes, and it is important to wait until it completes before performing another operation. When it is done, a status message confirms that the migration is complete.

  4. Once the migration is done, return to the Manage tab and review the results.
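The two datapoint-naming bases offered in step 2 can be sketched as a small helper. The composite format shown for source-based names is a guess made for illustration; PMF's actual generated names may differ.

```python
def datapoint_name(base, measure_name, source_name):
    """Illustrative only: PMF's actual generated names may differ."""
    if base == "legacy_measure_names":
        # The datapoint takes the name of the measure it is linked to.
        return measure_name
    if base == "loadable_source_names":
        # The name carries the source (and hence Master File) information.
        # This composite format is an assumption, not PMF's actual rule.
        return f"{source_name} - {measure_name}"
    raise ValueError(f"Unknown naming base: {base}")

print(datapoint_name("legacy_measure_names", "Revenue", "retail_sales"))
# Revenue
print(datapoint_name("loadable_source_names", "Revenue", "retail_sales"))
# retail_sales - Revenue
```

Whichever base you choose at migration time, you can rename the created datapoints afterward, as noted above.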


Data Lineage

Data lineage refers to the entire path of data through the PMF load architecture.

Step 1

For PMF, data lineage starts at the source. Datapoints that are linked to a source are left-side endpoints in the lineage. This means they are harvested directly from:

Example 1: In the case of a manufacturing company, there might be sources defined to harvest data from systems in Warehousing, Production Line, Quality Control, Shipping and Logistics, Supply Chain/Purchasing, Prospect Management, and Wholesale Sales. PMF harvests data as follows:

Step 2

Lineage then proceeds through each generation of derived datapoints. There is no limit to the number of phases possible.

Example 2: Continuing from Example 1, you can derive the following datapoints from those you loaded:

These datapoints need to be calculated in the following order:

  1. Total Product Cost, Total Sale Cost
  2. Net COGS
  3. Profit
  4. Margin

Step 3

Lineage then ends at measures.


Load, Recalculate, and Copy (LRC) Loads


To load measures, the PMF load architecture puts data through three phases:

  1. Load. All sources indicated for Load are loaded and data is fed into the datapoints for each source.
    • Volitional load. You click the Load button on a source. In this case, only the source you indicated is loaded.
    • Scheduled direct load. A source load is scheduled to run at that time.
    • Scheduled optional cascaded load. If any of the dimensions that are linked to the source are reloaded, a source load could be forced (cascaded) depending on how the dimension Cascade Load settings are configured.
    • Scheduled forced cascaded load. If any of the dimensions that are linked to the source are reorganized, a source load will always be forced (cascaded), regardless of how the dimension Cascade Load settings are configured.

    Note: During schedule processing, if more than one source must be loaded during the scheduled run, all scheduled sources are loaded before the next step runs. This prevents inefficient repetition of the recalculation step.

  2. Recalculate. PMF looks at all the sources that were reloaded, and analyzes all derived datapoints with dependencies on the sources that were loaded. PMF then analyzes the lineage of all derived datapoints to determine the correct order to recalculate them, respecting their dependencies. Finally, PMF performs the recalculation step, in phases, with the number of phases determined by the generations of lineage of the derived datapoints.
  3. Copy. PMF analyzes all measures that use the datapoints that were recalculated in Step 2. It then copies the data for the linked datapoints into the measures cube, making the data ready for reporting and dashboard publication.
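The three phases above can be sketched as a single cycle. Everything here is illustrative: the callables stand in for PMF's internal load, recalculation, and copy operations, which are not exposed as a programming API.

```python
def run_lrc(sources_to_load, load, derived_phases, recalc, measure_links, copy):
    """Illustrative Load -> Recalculate -> Copy cycle (not a PMF API)."""
    # 1. Load: every scheduled source is loaded before recalculation begins,
    #    so recalculation runs only once per cycle.
    for source in sources_to_load:
        load(source)

    # 2. Recalculate: derived datapoints are processed phase by phase, in
    #    lineage order, so precursors are resolved before their dependents.
    recalculated = set()
    for phase in derived_phases:
        for datapoint in phase:
            recalc(datapoint)
            recalculated.add(datapoint)

    # 3. Copy: measures linked to recalculated datapoints receive the new
    #    data in the measures cube, ready for reporting.
    for measure, datapoint in measure_links:
        if datapoint in recalculated:
            copy(datapoint, measure)

log = []
run_lrc(
    sources_to_load=["Shipping"],
    load=lambda s: log.append(("load", s)),
    derived_phases=[["Net COGS"], ["Profit"]],
    recalc=lambda d: log.append(("recalc", d)),
    measure_links=[("Profit Measure", "Profit")],
    copy=lambda d, m: log.append(("copy", d, m)),
)
print(log)
# [('load', 'Shipping'), ('recalc', 'Net COGS'), ('recalc', 'Profit'), ('copy', 'Profit', 'Profit Measure')]
```

The recorded log shows the key ordering guarantee: all loads finish first, recalculation proceeds phase by phase, and copies to measures happen last.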

Reference: Checking the Administrative Log Reports

PMF logs all activity that involves load, recalculation, or copy actions. Logged data is stored as peer data in the PMF Data Mart.

PMF captures the following data in its source load logs: