Our AI Use Policy

We build and deploy AI with transparency, human oversight, and a focus on safety and privacy. This page explains how Mariam Tech uses AI, the safeguards we apply, and how clients can request more information or opt out.

Last updated: 2025-09-01 (see change log below)

Key commitments:

  • Transparent labeling
  • Human-in-the-loop review
  • Strict data controls

Transparency

When Mariam Tech delivers AI-driven features (chatbots, document assistants, automation), we clearly label AI-generated content and provide citations or source references on a best-effort basis where applicable. We aim to let users know when a response, summary, or draft was produced by a model rather than a human.
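
To make this concrete, here is a minimal sketch of one way provenance can travel with a response so a client UI can render a label. It is illustrative only; the envelope type and field names are hypothetical, not a description of our production format.

    from dataclasses import dataclass, field

    @dataclass
    class AssistantResponse:
        """Hypothetical response envelope; all field names are illustrative."""
        text: str                                            # model-generated content
        ai_generated: bool = True                            # surfaced in the UI as a visible label
        model_name: str = ""                                 # which model produced the text
        citations: list[str] = field(default_factory=list)   # best-effort source references

    response = AssistantResponse(
        text="Summary of your contract ...",
        model_name="example-model-v1",
        citations=["contract.pdf, section 4.2"],
    )
    if response.ai_generated:
        print(f"[AI-generated by {response.model_name}]")
    print(response.text)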

Responsible use

We design and tune AI systems to reduce harmful outcomes and limit bias. Practices include output guardrails, periodic bias testing, and a commitment not to make misleading guarantees about business outcomes.

Privacy & data handling

Client and user data are treated as confidential. We adhere to the following principles:

  • Purpose limitation: only collect data necessary to provide the service.
  • Consent & control: clients can opt in/out of model fine-tuning or analytics.
  • Anonymization: pseudonymize or anonymize data where required (see the sketch after this list).
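
As an illustration of the pseudonymization principle, direct identifiers can be replaced with keyed hashes before records reach analytics or training. This is a minimal sketch under assumptions: the salt handling, field choice, and truncation length are hypothetical, not our actual pipeline.

    import hashlib
    import hmac

    SALT = b"per-client-secret"  # hypothetical; in practice the key comes from a secrets store

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
        return hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    record = {"email": "user@example.com", "ticket_text": "Please reset my password."}
    safe_record = {**record, "email": pseudonymize(record["email"])}
    print(safe_record)  # email is now a stable token, not a direct identifier

A stable keyed hash preserves the ability to join records for the same user without exposing the identifier itself.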

Human oversight & limitations

AI helps with drafting, triage, and automation, but it is not infallible. We therefore maintain the following safeguards:

  • Human-in-the-loop: critical decisions (legal, medical, financial) are reviewed by humans (see the routing sketch after this list).
  • Known limitations: models can hallucinate or reflect gaps in their training data.
  • Error reporting: clients can report issues and request investigations.
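
The routing idea behind human-in-the-loop review can be sketched in a few lines. The category tags and confidence threshold below are hypothetical placeholders, not our production values:

    CRITICAL_CATEGORIES = {"legal", "medical", "financial"}  # hypothetical tags

    def route_decision(category: str, model_confidence: float) -> str:
        """Queue critical or low-confidence outputs for a human; never auto-apply them."""
        if category in CRITICAL_CATEGORIES or model_confidence < 0.8:
            return "human_review_queue"
        return "auto_apply"

    print(route_decision("financial", 0.95))   # human_review_queue: critical category
    print(route_decision("scheduling", 0.92))  # auto_apply: routine, high confidence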

Security & retention

Security controls include encryption in transit and at rest, role-based access controls, and defined retention periods, with deletion and export available on request.
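
Retention policies of this kind are typically enforced mechanically. The following sketch shows the core check; the 90-day window and timestamp handling are hypothetical examples, since actual periods are set per contract:

    from datetime import datetime, timedelta, timezone

    RETENTION_DAYS = 90  # hypothetical window; real periods vary by contract and data type

    def is_expired(created_at: datetime, now: datetime | None = None) -> bool:
        """True when a record has outlived the retention window and should be purged."""
        now = now or datetime.now(timezone.utc)
        return now - created_at > timedelta(days=RETENTION_DAYS)

    created = datetime(2025, 1, 1, tzinfo=timezone.utc)
    print(is_expired(created))  # True once the record is more than 90 days old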

Client requests & contact

Clients can contact us at any time to manage their data, request disclosures about how AI was used in their project, or opt out of model fine-tuning and analytics.

Change log

  • 2025-09-01 — Clarified consent language for training data.
  • 2024-12-15 — Added section on human oversight and bias testing cadence.