AI Policy
Client Funnels Ltd — Last Updated April 2026
1. Purpose and Scope
This policy sets out how Client Funnels uses artificial intelligence (AI) tools in the business.
It applies to:
- Employees
- Contractors
- Freelancers
- Agencies
- Technology partners
- Any third-party tools used on our behalf
For the purposes of this policy, AI includes generative AI, automation systems, machine learning tools, chatbots, and predictive analytics.
AI governance is treated as a business-wide responsibility, not a single-tool issue.
2. Approved Uses
AI may be used for low-risk support tasks, including:
- Brainstorming ideas
- Drafting first versions of content
- Summarising notes or transcripts
- Internal research support
- Workflow automation
- Lead qualification support
- Chatbot assistance
AI outputs are always treated as drafts.
Human review is required before use in client-facing or public material.
3. Prohibited or Restricted Uses
The following uses are prohibited or tightly restricted:
- Making decisions about individuals solely through AI
- Generating legal, medical, financial, or HR advice without qualified review
- Uploading confidential client data into unapproved public tools
- Impersonation, deception, fake testimonials, or fabricated case studies
- Undisclosed synthetic media or deepfakes
- Discriminatory targeting or profiling
- Fully automating high-impact decisions
Where automated decision-making could have legal or similarly significant effects on individuals, UK GDPR safeguards must apply.
4. Roles and Accountability
The Managing Director is responsible for:
- Approving AI tools
- Conducting risk reviews
- Ensuring privacy and security checks are carried out
- Overseeing output review standards
- Handling AI-related incidents
- Signing off public-facing AI-assisted content
AI systems do not hold decision-making authority. Humans do.
5. Data Protection and Privacy
Where personal data is involved, UK GDPR principles apply.
We will:
- Limit data entered into AI systems
- Avoid entering confidential client strategy into public tools
- Use anonymisation where possible
- Confirm lawful basis for processing
- Follow retention and deletion rules
- Review vendor terms regarding model training and data reuse
- Conduct a Data Protection Impact Assessment (DPIA) where processing presents a high risk
Personal data may not be entered into unapproved AI tools.
6. Human Oversight
Human review is required for:
- Public-facing content
- Marketing claims
- Customer support messages
- Pricing or eligibility decisions
- Risk assessments
- Content that could affect reputation
AI may assist. Humans approve.
7. Accuracy and Validation
All AI outputs must be checked for:
- Factual accuracy
- Evidence for objective claims
- Bias or unfair language
- Misleading impressions
Marketing claims must comply with Advertising Standards Authority (ASA) standards, regardless of whether AI assisted in their creation.
8. Transparency and Disclosure
Where customers interact with AI systems such as chatbots, this must be made clear to them.
Disclosure is required where:
- It is not obvious the user is interacting with AI
- AI-generated media could mislead
- Synthetic voice or avatars are used
We will not disguise AI-generated content as something it is not.
9. Fairness and Bias Controls
AI must not:
- Create unfair discrimination
- Reinforce stereotypes
- Unfairly disadvantage protected groups
Audience targeting, segmentation, and lead scoring must be reviewed for fairness.
10. Security and Supplier Governance
Before adopting any AI tool, we review:
- Data security standards
- Access controls
- Breach procedures
- Vendor data terms
- Training and reuse provisions
- Location of processing
- Deletion rights
Suppliers must meet our data protection and security standards.
11. Intellectual Property
We will:
- Respect client IP
- Avoid using client content in public AI tools without consent
- Avoid passing off AI-generated work as human-created where this would mislead
- Take care with third-party assets, images, voices, or likeness
IP law in this area is evolving. Caution is required.
12. Training and Competence
Only authorised individuals may use AI tools for business purposes.
Users must:
- Understand this policy
- Complete basic AI governance training
- Escalate uncertain use cases to the Managing Director
13. Incident Reporting
AI incidents include:
- Data leaks
- Harmful or biased outputs
- Hallucinated legal or factual advice
- Unauthorised tool use
- Misleading marketing assets
- Customer complaints about AI interaction
Incidents must be reported to the Managing Director immediately.
14. Monitoring and Review
This policy will be reviewed quarterly.
New AI use cases must be documented and approved before launch.
