This policy is built on a risk-management foundation so that all Beaton people consistently identify, assess and respond to key AI-related risks across Beaton’s research, operations, compliance, strategy, reputation and financial management.
This policy is designed to govern our actions, give our clients confidence about how we handle data, and outline our expectations of suppliers of the AI systems we use.
OUR MISSION IS THE RESPONSIBLE AND ETHICAL USE OF AI
At Beaton, we use AI to enhance our expertise without replacing it. We are guided by industry best practice and peak body advice such as the Australian Government’s Voluntary AI Safety Standards. We are committed to innovation that ultimately benefits our clients. We stay updated with AI technology to ensure we maintain the security and confidentiality of the data we own and hold. As a leader in the professional services ecosystem, we aim to set high standards and acknowledge the broader societal impact of AI. We strive to contribute positively while minimising potential harm or bias. This is our promise.
OUR PRINCIPLES
Our commitments and protocols are grouped into four AI principles. These principles underpin how we currently use AI and will guide our approach in a fast-evolving landscape when faced with innovations and novel use cases.
![](https://sp-ao.shortpixel.ai/client/to_auto,q_lossy,ret_img,w_256,h_256/https://beatonglobal.com/wp-content/uploads/2025/02/ai-policy-privacy.png)
Privacy by design
Where AI is used to analyse personal information, we adhere to privacy processes that protect the confidentiality of this data, including, where relevant, human review. These processes include the masking of identifying information such as names (also known as pseudonymisation) and providing only the minimum necessary data to perform the analysis (also known as data minimisation). To the extent possible, we do not allow our data to become training data for any third party. Our use of external AI suppliers is done in accordance with Beaton’s Privacy Policy.
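As an illustration only (this is not Beaton's actual tooling), the two techniques named above can be sketched in a few lines: pseudonymisation replaces identifying fields with irreversible tokens, and data minimisation drops every field the analysis does not need. The record shape and field names here are hypothetical.

```python
import hashlib

def pseudonymise(record, id_fields=("name", "email")):
    """Replace identifying fields with stable, irreversible tokens."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            token = hashlib.sha256(str(out[field]).encode()).hexdigest()[:12]
            out[field] = f"anon_{token}"
    return out

def minimise(record, needed_fields):
    """Keep only the fields the analysis actually requires."""
    return {k: v for k, v in record.items() if k in needed_fields}

# Hypothetical survey response: tokenise identifiers, then strip
# everything except the fields the analysis needs.
response = {"name": "Jane Example", "email": "jane@example.com",
            "organisation": "Example LLP", "rating": 8}
safe = minimise(pseudonymise(response), {"name", "organisation", "rating"})
```

Because the same input always maps to the same token, analyses can still group responses by person without ever seeing the underlying identity.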
![](https://sp-ao.shortpixel.ai/client/to_auto,q_lossy,ret_img,w_256,h_256/https://beatonglobal.com/wp-content/uploads/2025/02/ai-policy-human-in-the-loop.png)
Humans in the loop, always
We ensure that humans are always involved when using AI systems. No data is uploaded automatically without someone knowing about it, and we never publish AI-generated work without a subject-matter expert’s oversight and review. We provide training and tools to help our team understand the limitations of AI and how to generate better outputs. Our team and AI cooperate and complement each other: AI handles mainly repetitive tasks, while humans contribute context, empathy, critical thinking, and an understanding of each client’s situation and business.
![](https://sp-ao.shortpixel.ai/client/to_auto,q_lossy,ret_img,w_256,h_256/https://beatonglobal.com/wp-content/uploads/2025/02/ai-policy-fact.png)
Facts, not fiction
As a market research consultancy, ensuring our work is accurate has long been, and will continue to be, core to how Beaton works. We bring this experience and vigilance to our AI work to minimise hallucinations, bias and misinformation. All AI outputs are reviewed by subject-matter experts, and AI systems are rigorously tested before deployment. We validate our prompt protocols for reliability and replicability, and to minimise false negatives.
![](https://sp-ao.shortpixel.ai/client/to_auto,q_lossy,ret_img,w_256,h_256/https://beatonglobal.com/wp-content/uploads/2025/02/ai-policy-transparency.png)
Transparency, no exceptions
We will disclose our use of AI whenever an AI system has been used to perform data analysis and wherever AI has been materially used to generate content (e.g. blog articles). In doing so, we provide a link to this policy. We openly report relevant AI limitations, such as performance issues with underrepresented data groups, and ensure stakeholders understand the extent of AI involvement. Our clients can opt out of the use of AI for data analysis in their projects.
GOVERNANCE
Beaton’s AI Project Group Lead oversees the execution of this policy, supported by the group’s other permanent members: the Executive Chairman, Privacy Officer, IT Manager and Research Lead.
The AI Project Group keeps this policy and our use of AI under constant supervision, updating this document as AI technology evolves. The group meets regularly and issues internal guidance monthly. We encourage Beaton people to explore different AI systems in low-risk scenarios and to share their knowledge widely. The group is also responsible for AI up-skilling and learning across Beaton.
The AI Project Group’s main governance actions include:
- Updating this AI Policy,
- Testing AI systems for competence and bias,
- Maintaining relevant records,
- Ensuring we are acting in accordance with Beaton’s Privacy Policy,
- Assessing risks of AI use cases,
- Providing best practice guidance internally, and
- Ensuring our people use Beaton-authorised AI systems in their Beaton capacity.
TERMS USED IN THIS POLICY
AI suppliers: Third party providers and developers of AI systems. AI suppliers may provide multiple AI systems (e.g. Google being the developer for Gemini, NotebookLM and Veo).
AI systems: A software framework that simulates intelligent behaviour to perform tasks requiring human-like reasoning, learning, problem-solving, perception or interaction. An AI system may be very broad and able to perform a range of tasks (e.g. ChatGPT) or narrow and designed to perform very specific tasks (e.g. OpusClip).
Australian Government’s Voluntary AI Safety Standards: A set of 10 guardrails the Australian Government has developed to provide businesses with guidance to benefit from AI systems while mitigating and managing associated risks.
Beaton people: Directors, employees, and contractors who present themselves as being part of Beaton (e.g. appear on the Beaton website).
Bias: Where the output of an AI system is unduly skewed by the quality of the underlying data, such as the under- or over-representation of different data sources.
Hallucinations: Where the output of an AI system contains factual errors, including the fabrication of underlying data to justify its output.
Misinformation: Where inaccurate output from an AI system, whether from bias, hallucinations or other means, is communicated to others and/or used for decision-making.
Personal information: This has the same meaning as in Australian privacy law, i.e. any data that can be used to identify an individual. The most common examples in Beaton data include a person’s name, email address, organisation name and the firms they have used.
EXAMPLES OF HOW WE USE AI AT BEATON
| WORK TYPE | EXAMPLES OF USE CASES | RISK LEVEL | RISK MINIMISATION PROTOCOL/S | AUTHORISED AI SYSTEM/S |
|---|---|---|---|---|
| Operations | | Low | | |
| Marketing | Drafting: | Low | | |
| Data analysis | | Medium | | |
| Research | | Medium | | |