Artificial Intelligence Policy

Effective from: 30 April 2026
Approved by: Chief Operating Officer

At a glance

This policy establishes principles and requirements for the ethical, responsible, and effective use of artificial intelligence (AI) tools and systems across Five Good Friends.

Scope

This policy applies to:

  • All employees, contractors, volunteers, and board directors of Five Good Friends.
  • All AI technologies that Five Good Friends develops, procures, configures or uses, including AI embedded within third-party software and cloud-based platforms.

Purpose

This policy establishes principles and requirements for the ethical, responsible, and effective use of AI tools and systems across our organisation. It reflects our commitment to using technology in ways that support our mission, uphold trust with customers and partners, and align with our values and legal obligations.

Specifically, this policy aims to:

  • Protect the rights, dignity, and interests of our Members, employees, contractors, and partners.
  • Support the use of AI to enhance service delivery, decision-making, and operational efficiency.
  • Ensure transparency and accountability in the way AI is used within Five Good Friends.
  • Provide a clear framework for identifying, assessing, and managing risks associated with AI systems.
  • Empower our people to use AI tools thoughtfully and safely to focus on higher-value work.

For the purposes of this policy, an AI system is any technology that uses data to make inferences and, with a degree of autonomy, generate outputs such as transcriptions, predictions, recommendations, content, or decisions.

This includes, but is not limited to:

  • Large Language Models (LLMs) and generative AI tools
  • Machine learning models
  • Predictive analytics systems
  • Chatbots that generate their own responses

This policy does not apply to:

  • Standard spreadsheet formulas
  • Rule-based automations (for example, simple “if–then” logic)
  • Traditional business intelligence dashboards without AI inference

If there is uncertainty about whether a tool or system falls under this policy, advice should be sought from the AI Governance Committee.

Policy statements

Employees and contractors may make use of approved large language model (LLM) powered systems and AI-enabled features to assist or augment their work. This may include tasks such as drafting content, analysing information, troubleshooting issues, or automating routine activities.

Used well, AI tools can help us improve the quality, consistency, and speed with which we achieve our objectives. Five Good Friends encourages their thoughtful use where they reduce friction, improve outcomes, or allow people to focus on higher-value and more human-centred work.

Used without judgement and care, AI tools can introduce inaccurate information, privacy and security risks, bias, legal exposure, or uncontrolled costs. For these reasons:

  • AI tools must not be used in ways that compromise the handling of personal or sensitive customer, employee, or partner data.
  • AI tools must not be used in ways that could mislead individuals or groups, automate unsafe decisions, or damage the reputation of Five Good Friends.

Approved and sanctioned AI tools

Employees may only use company‑sanctioned AI tools for work purposes. Approved tools are those:

  • Explicitly listed on the AI Tools and Systems at Five Good Friends page, or
  • Embedded AI assistants or AI features that form part of already approved software platforms (for example, productivity, CRM, or care management systems that have been through security, privacy, and vendor review).

Helpers may only use the company-sanctioned AI tool embedded in Lookout.

The use of personal accounts, free trials, browser plug‑ins, or unapproved AI tools for work-related tasks is not permitted.

Requesting approval for new AI tools

If an employee or contractor believes a new AI tool would provide business value, a request must be submitted to the AI Governance Committee prior to any approval or use.

All requests are reviewed by the AI Governance Committee to ensure:

  • Compliance with all applicable legal, regulatory, privacy, and security requirements
  • Alignment with Five Good Friends’ values, risk appetite, and strategic objectives
  • Clear benefits to our Members, Helpers, partners, and team
  • Appropriate controls relating to data handling, cost, accuracy, and human oversight

No new AI tool may be purchased, configured, trialled, or used for work purposes until it has been reviewed and approved through this process.

Five Good Friends maintains a curated list of approved AI tools along with guidance for their responsible and secure use. This list, and this policy, will be reviewed regularly as technology evolves, new risks emerge, and feedback is received.

AI use involving health and care records

High‑risk use of AI

The use of AI tools in connection with health information, care records, clinical notes, care plans, assessments, or other sensitive care data is classified as high risk at Five Good Friends.

Health information is subject to heightened legal, regulatory, and ethical requirements.

Approval requirements

AI tools must not be used to create, analyse, summarise, transcribe, extract insights from, or otherwise process health or care records unless all of the following conditions are met:

  1. The AI tool is a company‑sanctioned AI tool listed on the AI Tools and Systems at Five Good Friends page or embedded within an already‑approved software platform; and
  2. The specific use case involving health or care data has been explicitly reviewed and approved by the AI Governance Committee; and
  3. The use is for the primary purpose of supporting the delivery of care and falls within reasonable expectations of the individual whose information is being processed; and
  4. All applicable privacy, data protection, security, and regulatory requirements have been satisfied.

Approval of an AI tool does not constitute approval to use that tool with health or care records.

Consent and data handling

Additional requirements apply where AI is used with health or care data:

  • Explicit consent is required where:
    • A third‑party AI provider has access (directly or indirectly) to health information;
    • Health information is processed or stored outside Australia;
    • Health information is used to train or improve AI models.
  • AI tools must not retain, reuse, or repurpose health information beyond the approved use case.
  • Where information is sourced from external systems (including government or health record systems), additional legislative obligations apply.

Human oversight and limitations

  • AI outputs involving health or care information are assistive only.
  • AI must not replace clinical, professional, or care judgement.
  • All AI‑generated summaries, insights, or outputs must be reviewed, verified, and approved by a suitably qualified human before being relied upon, recorded, or acted upon.
  • Five Good Friends does not guarantee the accuracy, completeness, or suitability of AI outputs for clinical or care decision‑making.

Prohibited uses

AI tools must not be used to:

  • Make autonomous clinical, medical, or care decisions;
  • Provide diagnoses, treatment recommendations, or risk determinations without human review;
  • Circumvent regulatory obligations relating to health records;
  • Process health information in ways that have not been explicitly approved.

Any uncertainty regarding permitted use must be escalated to the AI Governance Committee before the AI tool is used.

Responsible use guidelines

The use of AI tools at Five Good Friends is subject to our existing policies and governance frameworks, including the following:

  • Code of Conduct: All use of AI must reflect our core values and be undertaken with professionalism, integrity, respect, and care.
  • Safe-to-share screenshot policy: Screenshots, prompts, or outputs shared with AI tools must comply with information security and data classification requirements.
  • Acceptable Use Policy: AI tools may only be used for legitimate business purposes (e.g. drafting documents, summarising text, analysing information). It is unacceptable to use AI where prohibited under this policy, including sharing or pasting personal or sensitive customer data into AI systems.
  • Cyber Security Incident Response Plan: Any suspected misuse, data exposure, policy breach, or AI-related incident must be reported to Five Good Friends in accordance with the Cyber Security Incident Response Plan (employees) or the Privacy Policy (all other stakeholders).
  • Privacy Policy: Personal information must only be collected, used, or disclosed in accordance with privacy obligations and regulatory requirements.
  • Third-Party Management Policy: AI tools will not be approved or used until they have been vetted for security, confidentiality, privacy, and legal risk.

Principles

Five Good Friends follows Australia’s AI Ethics Principles. These principles apply to all AI systems and AI-enabled tools used by Five Good Friends.

1. Ethical and human-centred use

AI systems must align with our values, respect human dignity, and support, not replace, human judgement. AI must not be used to deceive, manipulate, or unfairly disadvantage individuals or groups.

All AI use should be consistent with Australia’s AI Ethics Principles and contribute positively to our mission of care, connection, and trust.

2. Clear accountability

Each AI system or approved AI tool must have an accountable owner who understands its purpose, limitations, and risks and is responsible for its appropriate use and outcomes.

Where AI involves third-party vendors or partners, roles and responsibilities must be clearly documented.

3. Risk and impact assessment

AI systems must undergo a proportionate risk and impact assessment before implementation. Higher-risk use cases require enhanced oversight and approval.

Where AI may impact customers or vulnerable groups, additional care must be taken to understand and mitigate potential harms.

4. Quality, reliability, and security

AI systems must be tested before use and monitored over time to ensure reliable performance, accuracy, and security. Privacy, cybersecurity, and data protection controls apply equally to AI systems.

5. Fairness and inclusion

AI systems must not result in unlawful discrimination or unfair treatment. Their design and use should reinforce Five Good Friends’ commitment to inclusion, accessibility, and diversity.

6. Transparency and contestability

Where appropriate, customers and stakeholders should be informed when AI is used in impactful ways. AI-assisted decisions must be explainable, and mechanisms should exist to challenge or review outcomes.

7. Human oversight and control

Humans must retain meaningful oversight of AI systems. Users are responsible for reviewing AI outputs before they are relied upon, used externally, or acted upon.

Critical processes must have manual alternatives, and AI systems must be capable of being paused or disabled where necessary.

Governance and compliance

Roles and responsibilities

  • Policy Owner: Senior leader accountable for AI governance and policy maintenance.
  • Policy Approver: Executive team or Board with authority to approve this policy and material changes.
  • Compliance Monitor: AI Governance Committee, Quality and ICT Teams
  • AI System Owner: Individual accountable for a specific AI system or tool across its lifecycle.
  • All staff and contractors: Responsible for complying with this policy, completing required training, and reporting concerns.

New AI use cases

All new AI use cases must be screened to determine their risk level (e.g. normal, elevated, or prohibited) and the required level of approval and oversight. This activity will be overseen by the AI Governance Committee.

Incident management

Any AI-related incidents, data exposures, or policy breaches must be reported promptly and managed in accordance with our Cyber Security Incident Response Plan. Five Good Friends maintains the ability to suspend or disable AI systems where required.

The AI Governance Committee will review incidents related to AI tools and systems and their use.

References and related documents

Five Good Friends

  • Code of Conduct
  • Acceptable Use Policy
  • Safe-to-share screenshot policy
  • Cyber Security Incident Response Plan
  • Privacy Policy
  • Third-Party Management Policy
  • AI Tools and Systems at Five Good Friends

External

  • Australia’s AI Ethics Principles
  • Guidance for AI Adoption (Australian Government)
