Artificial Intelligence
Our AI Policy
updated January 2026
Mangrove Web sometimes uses artificial intelligence in our work. We use AI intentionally, with a human always in the loop, and only when it improves the quality, clarity, and integrity of what we deliver. We strive to continually review and improve our practices, protect privacy and confidentiality, be transparent when AI meaningfully contributes to a project, and consider environmental and social impacts.
Mangrove’s responsible use of artificial intelligence
At Mangrove, we build websites and digital communications for organizations working toward a more equitable and sustainable world. As a Certified B Corporation, we use technology in service of people, planet, and shared prosperity.
Artificial intelligence (AI) is one of many tools that we use to support that mission — and our intent is to use it thoughtfully. We approach AI with curiosity, restraint, and respect: as a technology that can expand imagination and craftsmanship, not replace them.
Our guiding principle is clear: AI should enhance the quality and integrity of our work in design, code, and communication, not merely its speed.
1. Why we use AI
Mangrove uses AI when it demonstrably improves outcomes for our team, our clients, and their audiences. This may include clarifying or conducting research, writing code more efficiently and accurately, testing accessibility, or refining written and visual content to improve clarity and consistency.
AI is neither automatic nor assumed. Its use is intentional and led by individuals on our team. We strongly believe that there must always be a human in the loop to ensure the quality and accuracy of our work.
We do not want to become machines, nor do we want to deliver trite or templated work. We are wary of the influence of AI and automation on us personally, on our culture, and on our industry.
At the same time, AI is not going away. Our job at Mangrove is to use it responsibly and in the best interests of our clients, who compensate us to make many expert decisions on their behalf.
2. How this policy was created
This document was written by the Mangrove team with the support of ChatGPT as part of our internal AI research. The model was given context about our service offerings and values, as well as the way we write and use language. The resulting draft was then extensively reviewed, edited, and approved by the Mangrove leadership team.
We include that disclosure deliberately. If we expect transparency from others, we believe we should model it ourselves first.
3. Guiding principles
Human-led, always. AI may assist with ideation, research, drafting, or prototyping, but final decisions rest with individual team members.
Quality before efficiency. AI is useful only when it raises the caliber of our work. If AI reduces clarity, inclusivity, or craft, we will not use it. We will carefully review outputs for hallucinations, as well as for racism, sexism, and other forms of discrimination that can emerge in AI-generated content.
Transparency. When AI significantly contributes to a deliverable, we will let our clients know. Disclosure builds trust. We will not build our products with AI alone; however, we may use it as a tool in the building process.
Privacy and confidentiality. We protect client information. Only approved, secure, enterprise-grade systems are used to protect internal and client material. No private data, proprietary code, or contracts are entered into public AI tools.
Originality and copyright. We respect intellectual property and will not prompt AI to imitate identifiable creators or reuse copyrighted material. AI outputs are treated as drafts, reviewed for accuracy and authenticity before presentation or publication.
Balance benefit and harm. AI brings real costs: economic, cultural, and social. We will do our best to weigh those costs before using it. If the value is uncertain, or if we believe its use would be detrimental to a client or project, we will not proceed.
4. Environmental and social responsibility
AI has an environmental footprint. The electricity required to train and run large-scale models continues to grow. As a B Corp, Mangrove takes these impacts seriously and will take them into account when we use AI platforms.
We also recognize the broader social implications: AI can displace creative work, contribute to layoffs, reduce job opportunities, and amplify bias. We will work to address these risks by:
- keeping human creativity, empathy, and cultural awareness at the center;
- reviewing outputs for fairness and inclusion; and
- engaging with our peers in open dialogue about responsible practice and AI.
5. Integrity and professional conduct
Mangrove upholds professional ethics consistent with our membership in the Association of Registered Graphic Designers (RGD). As members and supporters of RGD, we follow the organization’s Code of Ethics and its policy statement on AI-generated work and copyright.
This means:
- preserving human authorship and creative accountability;
- crediting creators and disclosing meaningful AI assistance;
- respecting copyright and avoiding unlicensed or derivative content;
- maintaining honesty, empathy, and professionalism in every project.
These principles guide our work as designers, developers, and communicators — and help ensure that AI enhances the field of communications rather than diminishing it.
6. Humility and continuous learning
AI is evolving faster than any single policy can contain. We will make mistakes. We might overtrust a tool, or simply underestimate its limitations. What matters is how we respond — by acknowledging our missteps, correcting them, and sharing what we learn.
We are also honest about our unease. We conducted an internal survey in October 2025 and found that many people on the team worry about AI even as they use it. AI introduces tremendous uncertainty: around authorship, data privacy, creative livelihoods, and environmental costs.
Mangrove shares these concerns with many in our B Corp community. In our use of AI, we recognize our fears and our hopes. Our commitment is to face our concerns directly, to learn openly, and to keep human well-being at the center of every technological decision and discovery.
7. Collaboration and trust
Responsible AI use is a shared responsibility. We collaborate with clients, partners, and peers to determine when and how AI adds genuine value. We will provide secure infrastructure, ongoing training, and ethical guidance to team members. Our process remains transparent:
- AI use is documented in project records.
- All AI-assisted work is reviewed by qualified professionals.
- When AI’s contribution is significant, we will disclose to clients how it helped us in our work.
- Clients may ask how AI was used at any stage of a project.
We hope that this openness will ensure that our client partnerships are built on ongoing trust, clarity, and respect.
8. Our commitment going forward
Mangrove will continue to explore AI carefully, grounded in the same principles that guide all of our work:
- Excellence and accessibility in every product and experience.
- Environmental and social responsibility in every decision.
- Honesty and humility in how we communicate and learn.
- Respect for human creativity as the heart of our craft.
We are learning as we go, alongside our B Corp peers, our clients, and our colleagues. We may not always get AI adoption right — but we will act in good faith, be transparent about what we are learning, and adapt as the technology landscape evolves.
Technology will continue to change, but our values will not.