In this article we highlight the difference between the two methods of scoring in Acumen Fuse.

Acumen Fuse is a powerful tool for undertaking forensic schedule analysis and scoring projects against standard metrics.  But before presenting the scores generated by Acumen Fuse, it is important to understand how the software calculates the scores and the key difference between the two available scoring methods.

Scoring Method Options

If you have only ever used the default settings in Acumen Fuse, you may not have realised that there are actually two options for scoring schedules. By default, Acumen Fuse calculates scores with the Record Fails if 1 Metric Fails method. The other available option, known as the Average of Metrics method, works in a completely different way.

To locate your current setting, navigate to the User Interface section of the Deltek Acumen Options:

[Image: Deltek Acumen Options - scoring method setting under User Interface]

What is the difference between the two scoring methods?

Each option calculates scores using a different method; a short sketch contrasting the two follows the lists below.

Record Fails if 1 Metric Fails:

  • equally weights every metric within a metric library
  • each activity receives either a Pass (= 1) or a Fail (= 0) score, and only one metric needs to fail for the activity to be counted as a fail
  • typically results in lower scores
  • the default set by Acumen Fuse
  • used as the basis for benchmarking

Average of Metrics:

  • the score is calculated using an average of the metric scores
  • the metrics are weighted, i.e. an activity does not Pass or Fail but receives a weighted score
  • the weighting of each metric can be customised or default weighting applied
  • typically results in higher scores
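
To make the contrast concrete, below is a minimal sketch in Python (not Acumen Fuse code). The metric names and weight values are illustrative only, and it assumes the 'maximum weighted value' used by the Average of Metrics method is simply the sum of all metric weights in the library.

# A minimal sketch (not Acumen Fuse code) contrasting the two per-activity
# scoring rules. 'tripped' maps each metric to True when the activity has met
# that metric's fail criteria; 'weights' holds illustrative metric weightings.

def record_fails_score(tripped):
    """Pass/Fail: the activity scores 100% only if no metric trips."""
    return 0.0 if any(tripped.values()) else 100.0

def average_of_metrics_score(tripped, weights):
    """Weighted: the score is reduced in proportion to the weights of the
    tripped metrics (assumed maximum = sum of all metric weights)."""
    tripped_total = sum(weights[m] for m, hit in tripped.items() if hit)
    maximum = sum(weights.values())
    return 100.0 * (1 - tripped_total / maximum)

# A hypothetical activity that trips a single, lightly weighted metric:
tripped = {"Lag": True, "Lead": False, "Merge Hotspot": False}
weights = {"Lag": 5, "Lead": 10, "Merge Hotspot": 20}

print(record_fails_score(tripped))                 # 0.0  -> counted as a Fail
print(average_of_metrics_score(tripped, weights))  # ~85.7 -> weighted score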

 

Example - Record Fails if 1 Metric Fails

To explain how the end results differ, we put together this simple working example:

  • an MS Project schedule which has 30 activities (including Summary tasks)
  • uses the Lags metric library
  • version 8.6 of Acumen Fuse

In the table below, 'X' indicates that the activity has met the criteria (i.e. it 'trips') for the metric specified:

[Table: activity results for the Lags metric library - Record Fails if 1 Metric Fails]

In the above example, each activity receives a score of either Pass or Fail.

If an activity does not meet the criteria for any of the metrics, it receives a Pass, resulting in a score of 100% for that activity.

To illustrate this, activity “2 Start Project” did not meet any of the 'fail criteria' and is therefore considered to have Passed, resulting in a score of 100%. Conversely, activities “11 Facility engineering” and “27 Early Work” each triggered a threshold that caused one metric to fail.

The total score for this metric library is then calculated based on the following formula:

Score (%) = Sum of all Activity Scores / Number of Activities

Using the above example:

Score (%) = 2500% / 30
          = 83%
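
Expressed as a quick calculation sketch (the 25/5 split is simply what a 2500% total over 30 activities implies under this method):

# Sketch only: 25 of the 30 activities pass (100%) and 5 fail (0%),
# which is what the 2500% total above implies.
activity_scores = [100.0] * 25 + [0.0] * 5

library_score = sum(activity_scores) / len(activity_scores)
print(f"{library_score:.0f}%")  # 83%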

And this is what we would see in the final Fuse outputs:

[Image: Fuse output score - Record Fails if 1 Metric Fails]

 

Example - Average of Metrics

Let's examine what happens when we switch the basis of the calculation.

In this method, each metric has a weighting applied. The default weighting of each metric is specified on the Metrics tab in Acumen Fuse; the weightings for the Lags metric library used in this example are shown below as they appear in Acumen Fuse.

[Image: Lags metric library weightings in Acumen Fuse]

The weightings for this metric library, as shown in Acumen Fuse, are also tabulated below.

[Table: Lags metric library weightings]

In the table below, instead of an 'X' marking that a metric's criteria have been met, the weighted value of that metric is shown.

[Table: per-activity weighted scores - Average of Metrics]

The score for each activity is calculated based on the following formula:

Activity Score (%) = 100% - (total weighted value for the activity / maximum possible weighted value x 100%)

Using the above example:

Activity “11 Facility engineering” Score (%) = 100% - (-5 / -35 x 100%)
                                             = 100% - 14%
                                             = 86%
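
The same arithmetic as a short Python sketch, using the -5 and -35 values from the worked example above:

# Activity "11 Facility engineering": the tripped metric(s) carry a weighted
# total of -5 against a maximum possible weighted value of -35.
activity_total = -5
maximum_weighted_value = -35

activity_score = 100 - (activity_total / maximum_weighted_value * 100)
print(f"{activity_score:.0f}%")  # 86% (i.e. 100% - 14%)

Each of the remaining activities is scored in the same way before the library-level average below is taken.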

The total score for this metric library is calculated using the same formula as applied to the Record Fails if 1 Metric Fails scoring method.

Score (%) = Sum of all Activity Scores / Number of Activities

Using the above example:

Score (%) = 2857% / 30
          = 95%

The score as it appears in the Ribbon Analyzer in Acumen Fuse is shown below:

[Image: Ribbon Analyzer score - Average of Metrics]

 

How much difference does the scoring method make to the overall score?

In the example above, the same base schedule was analysed using the two calculation methods, resulting in a difference of 12 percentage points (83% versus 95%). Only 5 of the 30 activities received a different score, as outlined in the table below:

[Table: per-activity score comparison between the two scoring methods]

Whilst our example above was a smaller, less complicated schedule than we typically work with, even larger variances could be expected when a more 'typical' schedule is examined, with the two methods likely differing by well more than 12 percentage points.

When we look at the example schedule using the "Schedule Quality" metric group, the overall score increased by 59 percentage points, from 23% (based on Record Fails if 1 Metric Fails) to 82% (based on Average of Metrics):

Record Fails if 1 Metric Fails

[Image: Schedule Quality score - Record Fails if 1 Metric Fails]

Average of Metrics

[Image: Schedule Quality score - Average of Metrics]

 

Is it ok to just run Acumen Fuse without understanding how the scores are calculated?

Whether you’re using Acumen Fuse to score your own schedule against schedule metric targets or undertaking an analysis of a client’s schedule, it is important that the results are qualified and the basis of the scoring is transparent. Consistent and transparent scoring also ensures a level playing field when comparing results between different schedules.

 

Which scoring method should I use?

In most instances, Record Fails if 1 Metric Fails is the preferred method, as it is the same scoring method used for the Fuse Schedule Index in Benchmarking. However, as shown in the example, Average of Metrics scoring results in higher scores and provides greater flexibility in weighting the importance of each metric measured against an activity.

So if your organisation pre-defines the metrics and weightings that are mandated for schedule quality, then this methodology could be adopted in a consistent manner using templated metric groups.

 

What does GBA use?

At GBA Projects we have developed our own Metric Libraries in Acumen Fuse and can use the following combination when scoring schedules:

  • Standard Acumen Fuse Metric Libraries with Record Fails if 1 Metric Fails scoring; and
  • Customised GBA Metric Libraries with Average of Metrics scoring.

The metrics within the library have been carefully selected and standardised to ensure that each Fuse analysis is consistent.

 


