# Acumen Fuse - Understanding how Scores are Calculated

In this article we highlight the difference between the two methods of scoring in Acumen Fuse.

Acumen Fuse is a powerful tool for undertaking forensic schedule analysis and scoring projects against standard metrics.  But before presenting the scores generated by Acumen Fuse, it is important to understand how the software calculates the scores and the key difference between the two available scoring methods.

## Scoring Method Options

If you have only ever used the default settings in Acumen Fuse, you may not have realised that there are actually two options for scoring schedules. By default, Acumen Fuse calculates scores with the Record Fails if 1 Metric Fails method. The other available option, known as the Average of Metrics method, works in a completely different way.

To locate your current setting, navigate to the User Interface section of the Deltek Acumen Options.

## What is the difference between the two scoring methods?

Each option calculates its results in a different way.

Record Fails if 1 Metric Fails:

• weights every metric within a metric library equally
• each activity receives either a Pass (1) or a Fail (0); only one metric needs to fail for the activity to be counted as a fail
• typically results in lower scores
• is the default set by Acumen Fuse
• is used as the basis for benchmarking
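The pass/fail logic above can be sketched in a few lines of Python. This is a simplified illustration of the method, not Acumen Fuse's actual implementation; `record_fails_score` and the trip data are hypothetical names:

```python
def record_fails_score(tripped_metrics_per_activity):
    """Record Fails if 1 Metric Fails: an activity scores 1 (Pass) only if
    none of its metrics tripped, otherwise 0 (Fail). The library score is
    the percentage of passing activities."""
    activity_scores = [0 if any(trips) else 1
                      for trips in tripped_metrics_per_activity]
    return 100 * sum(activity_scores) / len(activity_scores)

# 30 activities, 5 of which trip at least one metric
# (the same proportions as the worked example later in this article)
trips = [[True]] * 5 + [[False]] * 25
print(round(record_fails_score(trips)))  # 83
```

Note that an activity tripping one metric or five metrics makes no difference under this method; it is a Fail either way.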

Average of Metrics:

• the score is calculated using an average of the metric scores
• the metrics are weighted, i.e. an activity does not Pass or Fail but receives a weighted score
• the weighting of each metric can be customised, or the default weighting applied
• typically results in higher scores
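As a rough sketch of the weighted calculation (again with hypothetical helper names, not the Fuse implementation), an activity's score can be expressed as the share of the worst-case weighted total that it avoided:

```python
def average_of_metrics_activity_score(tripped_weights, max_weighted_value):
    """Average of Metrics: each tripped metric contributes its weight; the
    activity score is the fraction of the maximum (worst-case) weighted
    value NOT incurred, expressed as a percentage."""
    total_for_activity = sum(tripped_weights)
    return (1 - total_for_activity / max_weighted_value) * 100

# One metric with weight -5 tripped, against a worst case of -35
# (the figures used in the worked example later in this article)
print(round(average_of_metrics_activity_score([-5], -35)))  # 86
```

An activity that trips nothing scores 100%, and heavier-weighted metrics drag the score down further than lighter ones, which is the flexibility this method offers.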

## Example - Record Fails if 1 Metric Fails

To explain how the end results differ, we put together this simple working example:

• an MS Project schedule which has 30 activities (including Summary tasks)
• uses the Lags metric library
• version 8.6 of Acumen Fuse

In the table below, 'X' indicates that the activity has met the criteria (i.e. it 'trips') for the metric specified.

In the above example, each activity receives a score of either Pass or Fail.

If an activity does not meet any of the criteria set for each of the metrics, it receives a Pass resulting in a Score of 100% for that activity.

To illustrate this, activity “2 Start Project” did not meet any of the 'fail criteria' and is therefore considered to have Passed, resulting in a score of 100%. Conversely, activities “11 Facility engineering” and “27 Early Work” each tripped a threshold that caused one metric to fail.

The total score for this metric library is then calculated based on the following formula:

Score (%)       = Sum of all Activity Scores / No of Activities

Using the above example:

Score (%)       = 2500% / 30
= 83%

And this is what we would see in the final Fuse outputs.

## Example - Average of Metrics

Let's examine what happens when we switch the basis of the calculation.

In this method, each metric has a weighting applied. The default weighting of each metric is specified in the Metrics tab in Acumen Fuse; the weightings for the Lags metric library, as they appear in Acumen Fuse, are tabled below.

In the table below, instead of an 'X' marking that the metric criteria have been met, the weighted value of the criteria is shown. The score for each activity is calculated based on the following formula:

Activity Score (%)     = ( 1 - (total for the activity / maximum weighted value) ) x 100

Using the above example:

Activity “11 Facility engineering” Score (%)           = ( 1 - ( -5 / -35 ) ) x 100
= ( 1 - 0.14 ) x 100
= 86%

The total score for this metric library is calculated using the same formula as applied to the Record Fails if 1 Metric Fails scoring method.

Score (%)       = Sum of all Activity Scores / No of Activities

Using the above example:

Score (%)       = 2857% / 30
= 95%
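Both worked examples share the same final aggregation step, which can be checked against the totals quoted above (a sketch; `library_score` is a hypothetical helper, not part of Acumen Fuse):

```python
def library_score(sum_of_activity_scores_pct, number_of_activities):
    """Overall metric-library score: the average of the per-activity scores."""
    return sum_of_activity_scores_pct / number_of_activities

print(round(library_score(2500, 30)))  # 83  (Record Fails if 1 Metric Fails)
print(round(library_score(2857, 30)))  # 95  (Average of Metrics)
```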

The Score as illustrated in the Ribbon Analyzer in Acumen Fuse is shown below.

## How much difference does the scoring method make to the overall score?

In the example above, the same base schedule was analysed using the two different calculation methods, resulting in a difference of 12%. Only 5 of the 30 activities achieved a different score, as outlined in the table below.

Our example was a smaller, less complicated schedule than those we typically work with, so it is reasonable to expect that the variance between the two methods would exceed 12% on a more typical schedule.

When we look at the example schedule using the "Schedule Quality" metric group, the overall score increased by 59 percentage points, from 23% (based on Record Fails if 1 Metric Fails) to 82% (based on Average of Metrics):

## Is it ok to just run Acumen Fuse without understanding how the scores are calculated?

Whether you’re using Acumen Fuse to score your own schedule against schedule metric targets or undertaking an analysis of a client’s schedule, it is important that the results are qualified and the basis of the scoring is transparent. Consistent and transparent scoring also ensures a level playing field when comparing the results between different schedules.

## Which scoring method should I use?

In most instances, Record Fails if 1 Metric Fails is the preferred method, as the Fuse Schedule Index used in Benchmarking is based on this same scoring. However, as shown in the example, Average of Metrics scoring results in higher scores and provides greater flexibility regarding the importance of each metric measured against an activity.

So if your organisation pre-defines the metrics and weightings mandated for schedule quality, this methodology can be adopted consistently using templated metric groups.

## What does GBA use?

At GBA Projects we have developed our own Metric Libraries in Acumen Fuse and can use the following combination when scoring schedules:

• Standard Acumen Fuse Metric Libraries with Record Fails if 1 Metric Fails scoring; and
• Customised GBA Metric Libraries with Average of Metrics scoring.

The metrics within the library have been carefully selected and standardised to ensure that each Fuse analysis is consistent.
