Beaton’s newly published AI Governance and Risk Policy outlines the firm’s approach to the rapidly evolving AI landscape. The policy prioritises client data privacy and outlines specific safeguards for various AI applications, aiming to balance innovation with responsible use. The move comes as professional services firms increasingly explore AI’s potential, though many are still in the early stages of adoption.
AI ethics and transparency: Beaton's four core principles
The policy is underpinned by four core principles: ensuring the privacy of data; transparently disclosing AI use; keeping a human in the loop at all times, with all AI-generated outputs reviewed by a subject matter expert; and maintaining accuracy by actively mitigating bias, hallucinations, and misinformation. Ewa Foroncewicz, leader of Beaton’s AI Project Group, said the firm wanted to be “completely transparent and proactive” in its approach to AI.
Beaton's AI Project Group: Governance and implementation
The dedicated internal AI Project Group oversees the implementation and ongoing development of the policy. This group meets weekly and includes key personnel: the executive chairman, the AI lead, the privacy officer, the IT manager, and the research lead. This cross-functional team is responsible for developing protocols, testing AI systems, providing guidance to employees, and ensuring compliance with the policy.
Foroncewicz describes the group as “absolutely essential” for “constantly learning, adapting, and making sure everyone at Beaton is using AI responsibly.” The policy is actively communicated across the organisation; every Beaton person “is expected to know it and ensure that their teams know, understand, and adhere,” Foroncewicz said.
Beaton’s AI policy governs how it works with the data of professional services firms.
AI applications and safeguards: Practical use cases
The policy details four primary use cases for AI within Beaton, each with tailored safeguards:
1. Everyday operations
AI tools may be used to review and improve written communication, enhancing clarity and efficiency. Safeguards include mandatory human review of all AI-generated text before external use.
2. Marketing material creation
AI can assist in generating content for blogs, articles, and videos. Examples of such content include this very blog article, which was put together with significant input from AI under human supervision. Human oversight is critical here, with subject matter experts reviewing and editing all AI-generated content to ensure accuracy and appropriateness.
3. Verbatim data analysis
AI can be used to analyse existing qualitative data, such as open-ended comments in surveys, to identify key themes and trends. Safeguards include de-identification of data before AI processing and rigorous validation of AI-generated insights by researchers. For example, a researcher always reviews AI-identified themes and checks them against the original source data.
4. Qualitative in-depth interviews
AI can assist in conducting and analysing qualitative interviews with clients and firms’ internal stakeholders. Strict protocols are in place to ensure informed consent, data privacy, and the accuracy of any AI-driven analysis.
Client benefits: Leveraging AI for deeper market research insights
Beaton emphasises the benefits of its AI approach for firms. By leveraging its extensive longitudinal dataset of client feedback in professional services, Beaton can train AI models on real-world, B2B-specific language. This leads to more accurate and relevant insights compared to models trained primarily on consumer data. The policy enables faster analysis of qualitative data, providing clients with quicker turnaround times and more comprehensive understanding of feedback. Foroncewicz highlights the potential to “unlock even deeper insights from our data,” identifying trends “with greater speed and precision.”
Beaton’s AI policy is a living document, designed to be updated as technology improves and AI use becomes ubiquitous.
Data privacy and security in AI: Beaton's commitment
The policy reiterates Beaton’s commitment to complying with Australian and New Zealand privacy laws, as well as GDPR. Robust cybersecurity protocols are in place to protect all data, and the proactive minimisation of identifying information before AI processing adds an extra layer of security. Foroncewicz states that “data privacy is absolutely paramount – it’s our number one priority.”
Beaton acknowledges the dynamic nature of AI and states that its policy will evolve accordingly. The AI Project Group will continuously monitor technological advancements and emerging risks, updating protocols as needed. Foroncewicz emphasises that because AI is changing rapidly, “our policy has to be a living document.”
The future of AI in professional services: Growing safely
Foroncewicz states that Beaton researchers are “trained to spot potential biases” and critically evaluate AI outputs, and that “the diversity in our AI Project Group helps us spot things we may not otherwise alone.” She hopes that Beaton’s use of AI will allow the professional services sector “to serve their clients even better”.
The full AI policy is available for review on Beaton’s website. The firm encourages interested parties to examine the policy and welcomes inquiries about its AI approach and potential applications for client data.
The content of this page was written with the assistance of AI, based on inputs from a human. A subject-matter expert reviewed all output for privacy, accuracy and validity before publication. This is being disclosed in accordance with Beaton’s AI Policy.