Benchmarking

What is Benchmarking and how does it work?

By understanding application usage within an organisation, you can make better-informed decisions and reduce application sprawl.

Benchmarking is the practice of comparing business processes and performance metrics to best practices from other companies in the industry. Quality, time, and cost elements are measured and compared.

Benchmarking methodologies use commonly available metrics, such as IT labour, hardware and software costs, numbers of devices, and FTE counts, to calculate comparative metrics such as the following (a simple worked example appears after this list):

  • Broad IT spend per head and per platform
  • FTE counts, including ratios for ICT staff utilisation within an organisation
  • Infrastructure metrics such as the number of OS instances and the number of desktop and laptop devices in an organisation, as well as storage metrics
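
As a rough sketch of how such comparative ratios are derived (the metric names and figures below are hypothetical, not drawn from any real benchmark):

```python
# Illustrative only: all inputs below are invented for this sketch.
it_spend_total = 4_200_000      # annual IT spend (AUD)
headcount = 1_500               # total staff (FTE)
ict_staff = 60                  # ICT staff (FTE)
os_instances = 1_800            # servers + desktops + laptops

spend_per_head = it_spend_total / headcount          # broad IT spend per head
ict_staff_ratio = headcount / ict_staff              # staff supported per ICT FTE
spend_per_instance = it_spend_total / os_instances   # spend per OS instance

print(f"IT spend per head:     ${spend_per_head:,.0f}")
print(f"Staff per ICT FTE:     {ict_staff_ratio:.0f}")
print(f"Spend per OS instance: ${spend_per_instance:,.0f}")
```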

Current benchmarking focuses on readily available IT metrics and compares these to similar organisations, typically normalising for variations between different technologies, service delivery models, and skill sets.

This information may be coupled with end-user surveys to try to quantify usage and user experience, asking questions such as:

  • Is the application readily available to use?
  • Is the application responsive?
  • Does the application meet my business needs?

While traditional benchmarking gives an indication of an application’s value to an organisation, the results are not completely evidence-based.

Why is Benchmarking important?

Merely taking available cost, FTE, and infrastructure metrics and calculating ratios does not provide the deeper level of analysis required to understand the impact and effect of IT on your business.

Information from user surveys is based on expectations and perception, often with little factual data to substantiate the responses.

This lack of transparency means benchmarking activities may not provide enough concrete evidence to pinpoint the causes of inefficiencies in your IT operation.

Introspectus Key Features

What’s the solution?

Assessor Benchmarking

Introspectus uses evidence-based data to provide a much deeper level of analysis of your IT environment, such as the following (a brief sketch of this kind of telemetry appears after this list):

  • Time actually spent using an application (measured by keyboard and mouse activity) by region or by department and mapped over time
  • Timeframes within the day (or night) when applications are used more (or less)
  • Actual application load times (in seconds), by employee or location and mapped over time
  • Actual employee logon times (in seconds) by location and mapped over time
  • Application usage, by version
  • Actual hardware usage (elapsed time, periods of most and least use, and last logon details)
  • Actual software installed in the environment, with version numbers, active users, and time used
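
To picture the kind of evidence involved, the sketch below rolls hypothetical activity records up by application and department; the record layout and figures are invented for illustration, not Introspectus’s actual data model:

```python
from collections import defaultdict
from datetime import date

# Hypothetical activity records: (day, department, application, active_minutes)
# "active_minutes" stands in for time with observed keyboard/mouse activity.
records = [
    (date(2024, 5, 1), "Finance", "ERP Client", 190),
    (date(2024, 5, 1), "Finance", "PDF Editor", 12),
    (date(2024, 5, 1), "Sales",   "ERP Client", 35),
    (date(2024, 5, 2), "Finance", "ERP Client", 204),
]

# Roll actual usage up by (application, department) over the period
usage = defaultdict(int)
for day, dept, app, minutes in records:
    usage[(app, dept)] += minutes

for (app, dept), minutes in sorted(usage.items()):
    print(f"{app} / {dept}: {minutes / 60:.1f} hours of active use")
```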

Introspectus data can be combined with commonly available metrics, such as your organisation’s:

  • IT labour costs
  • Hardware and software costs
  • Numbers of IT devices
  • FTE providing and using IT services
  • Service provider costs

How does this affect my decisions?

As a decision maker within your organisation, you can use Introspectus information to analyse:

  • To what extent your organisation’s applications are used over time:
    • How many people are actually using them
    • When they are being used
    • For how long
    • How intensively
    • What the software actually costs per hour of use, not just the amount paid (a simple worked example follows this list)
  • How IT workload changes over time in step with changes in your business and ICT environments
  • Which hardware and software to invest in or retire, based on actual usage trends
  • Usage trends for individual applications over time
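
As a rough illustration of the cost-per-hour-used calculation mentioned above (all figures are invented):

```python
# Hypothetical example: cost per hour of actual use, versus licence cost alone.
annual_licence_cost = 48_000     # what was paid for the application (AUD/year)
licensed_seats = 200
active_hours_per_year = 6_400    # total measured active use across all users

cost_per_seat = annual_licence_cost / licensed_seats
cost_per_hour_used = annual_licence_cost / active_hours_per_year

print(f"Cost per seat:      ${cost_per_seat:,.2f}")
print(f"Cost per hour used: ${cost_per_hour_used:,.2f}")
# A high cost per hour used relative to peer applications can flag a
# candidate for consolidation or retirement.
```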

How Introspectus Helps

Each agent compares the current patch list against what is actually installed on its device. Any gap between what has been released and what is deployed is immediately surfaced. Critically, Introspectus pays particular attention to the timing of patch deployment: not just whether a patch is present, but when it was applied.

This temporal dimension is central to Essential Eight compliance, where the difference between a patch applied on day two versus day thirty can mean the difference between maturity levels, and between an environment that was protected and one that was exposed.
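
A minimal sketch of that comparison might look like the following; the patch identifiers, dates, and data layout are invented for illustration and are not Introspectus’s implementation:

```python
from datetime import date

# Hypothetical data: patches released by the vendor, and what an agent
# actually observed installed on one device (with the date applied).
released = {"KB500158": date(2024, 6, 4), "KB500172": date(2024, 6, 11)}
installed = {"KB500158": date(2024, 6, 6)}   # patch id -> date applied

# Gap: released but not yet deployed on this device
missing = set(released) - set(installed)
print("Missing patches:", sorted(missing))

# Timing: how long each installed patch took to apply after release
for patch, applied in installed.items():
    days = (applied - released[patch]).days
    print(f"{patch}: applied {days} days after release")
```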

This combination of daily patch intelligence, severity-based filtering, agent-level validation, and deployment timing analysis gives organisations a real-time, evidence-based view of their operating system patch posture, mapped directly to the ISM controls applicable to the Essential Eight patch operating systems strategy.

The Challenge with Patch Operating Systems

The visibility gap here is particularly consequential. A patch may be approved and scheduled, yet never successfully applied due to a failed deployment, a device that was offline during the maintenance window, a reboot that was deferred, or a system that exists outside managed channels entirely.

Organisations that rely solely on deployment tooling to confirm patch status are measuring intent, not reality. The ACSC is explicit on this point: organisations need to confirm patches have been applied successfully, not merely that they were dispatched.

Patch Operating Systems Overview

Within the Essential Eight framework, patching operating systems is a core and non-negotiable control. The ACSC sets clear expectations: patches for internet-facing infrastructure must be applied within 48 hours when identified as critical or where working exploits exist, and within two weeks for standard releases.

Patches for workstations, servers, and network devices must be applied within one month, with tighter timeframes applying in high-threat environments. Critically, the ACSC also mandates that vulnerability scanning occurs at least daily for internet-facing systems and at least fortnightly for workstations and non-internet-facing infrastructure: not to replace patching, but to confirm it has actually occurred.
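
Those timeframes can be expressed as simple deadline rules. The sketch below is illustrative only: the category names are assumptions rather than an official ACSC schema, and "one month" is approximated as 30 days:

```python
from datetime import timedelta

# OS patching timeframes as stated above, expressed as maximum patch ages.
PATCH_DEADLINES = {
    "internet_facing_critical": timedelta(hours=48),
    "internet_facing_standard": timedelta(weeks=2),
    "workstations_servers_network": timedelta(days=30),  # "one month" approximated
}

SCAN_INTERVALS = {
    "internet_facing": timedelta(days=1),       # at least daily
    "non_internet_facing": timedelta(days=14),  # at least fortnightly
}

def within_deadline(age: timedelta, category: str) -> bool:
    """Return True if a patch of the given age still meets its timeframe."""
    return age <= PATCH_DEADLINES[category]

print(within_deadline(timedelta(days=3), "internet_facing_critical"))       # False
print(within_deadline(timedelta(days=20), "workstations_servers_network"))  # True
```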

How Introspectus Works

From this inventory, Introspectus performs targeted web intelligence gathering. For each application identified, the platform locates the top five authoritative sources of patch and release information (vendor security advisories, release notes, and vulnerability databases) and retrieves that content into a central repository.

Aletheia, Introspectus’s AI analysis agent, then reads and analyses this content to extract the intelligence that matters for application patching: the latest available version, whether a release addresses a security vulnerability, the severity of that vulnerability, and all information relevant to the Essential Eight application patching requirements. This structured intelligence is mapped directly to the applicable ISM controls, producing defensible, audit-ready evidence of an organisation’s application patch compliance posture.
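
One plausible shape for that structured intelligence is sketched below; the field names and sample values are assumptions for illustration, not Aletheia’s actual output format:

```python
from dataclasses import dataclass

# A hypothetical record shape for the structured intelligence described above.
@dataclass
class PatchIntelligence:
    application: str
    installed_version: str
    latest_version: str
    addresses_vulnerability: bool
    severity: str           # e.g. "critical", "high", "medium", "low"
    source_urls: list[str]  # the authoritative sources consulted

record = PatchIntelligence(
    application="ExamplePDF Reader",   # invented product name
    installed_version="11.0.2",
    latest_version="11.0.4",
    addresses_vulnerability=True,
    severity="critical",
    source_urls=["https://vendor.example/security-advisories"],
)
print(record)
```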

The Challenge with Patch Applications

A critical and frequently overlooked problem is the visibility gap. Organisations may believe their applications are current when, in reality, patches have silently failed, devices have missed deployment windows, or software has been installed outside of managed channels entirely.

Without continuous inspection at the endpoint level, these gaps go undetected until an audit or, worse, a breach.

Patch Applications Overview

Within the Essential Eight standard, patching applications is a dedicated and non-negotiable control. The ACSC specifies clear timeframes: critical vulnerabilities in internet-facing services must be addressed within 48 hours; commonly used applications such as office productivity suites, web browsers, email clients, and PDF software must be patched within two weeks of release; and all other applications within one month.

For organisations in high-threat environments, the bar is higher still. Meeting these requirements consistently across hundreds of distinct applications deployed across thousands of endpoints is not achievable through manual effort alone.
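
A sketch of what tracking those windows involves, even before scaling to thousands of endpoints, might look like this; the application names, categories, and data layout are invented, and "one month" is approximated as 30 days:

```python
from datetime import date, timedelta

# Application patching windows per the timeframes above (illustrative names).
WINDOWS = {
    "critical": timedelta(hours=48),  # internet-facing, critical vulnerabilities
    "common": timedelta(weeks=2),     # office suites, browsers, email, PDF
    "other": timedelta(days=30),      # "one month" approximated
}

# Hypothetical inventory rows: (application, category, patch_release_date, patched)
inventory = [
    ("Browser X",    "common", date(2024, 7, 1),  False),  # invented names
    ("PDF Tool Y",   "common", date(2024, 7, 3),  True),
    ("Line-of-biz",  "other",  date(2024, 6, 10), False),
]

today = date(2024, 7, 20)
for app, category, released, patched in inventory:
    if not patched and today - released > WINDOWS[category]:
        print(f"OVERDUE: {app} ({category}), patch released {released}")
```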