How we use AI, and what we won’t do.
TagDrishti uses statistical models to detect tag-delivery failures on the websites our customers operate. This page tells you what those models do, what data they see, what they do not do, and the controls you can rely on.
We do not use customer data to train shared models. We do not aggregate data across customers to draw inferences. AI does not make pricing, billing, or account-state decisions about you. Every alert produced by our engine includes the raw signal that triggered it, so you can audit any decision the engine made.
1. What we use AI for.
Our anomaly-detection engine uses tenant-scoped statistical models to detect:
- Real-time tag delivery failures (a configured tag was supposed to fire to GA4, Meta, Ads, or your custom server-side endpoint and didn’t).
- Consent-violation patterns (a tag fired when consent was denied; a credential was used outside the consent envelope).
- Authentication failures on customer credentials (a server-side container can’t reach a destination because the API key rotated).
- Per-tag failure-rate spikes inside a single tenant’s baseline (≥3 failures in 15 minutes against a sustained sample of ≥10 events).
The engine is deterministic where possible (rule-based) and statistical only where rule-based detection produces unacceptable false-negative rates. The classification of every check as event_level or statistical is codified in the ALERT_POLICY constant in our backend source.
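For illustration, a minimal sketch of what that constant could look like. The type and field names below are our own assumptions, not our actual backend source; only the event_level/statistical split and the spike thresholds come from this page.

```ts
// Illustrative sketch only. The identifiers are assumptions; the
// classifications and thresholds are the ones stated above.
type AlertCheck =
  | { kind: "event_level" }
  | { kind: "statistical"; minFailures: number; windowMinutes: number; minSample: number };

const ALERT_POLICY: Record<string, AlertCheck> = {
  tag_delivery_failure: { kind: "event_level" },   // tag was configured to fire and didn't
  consent_violation: { kind: "event_level" },      // tag fired while consent was denied
  credential_auth_failure: { kind: "event_level" },// destination rejected a rotated key
  failure_rate_spike: {
    kind: "statistical",
    minFailures: 3,    // at least 3 failures
    windowMinutes: 15, // within 15 minutes
    minSample: 10,     // against a sustained sample of at least 10 events
  },
};
```

Keeping the classification in one constant means the boundary between deterministic and statistical checks is reviewable in a single diff.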
2. What we do not use AI for.
- We do not use AI or ML to set pricing, generate invoices, or change account state. Plan tier and event quotas are deterministic.
- We do not run generative-AI / large-language-model inference on customer data in any production code path today. If we add a customer-facing generative feature, it will ship behind an explicit opt-in and a separate disclosure on this page.
- We do not aggregate data across customers to train any shared model. Each tenant has its own baseline, computed from its own events, used only for that tenant’s alerts.
- We do not sell, license, or otherwise share AI-derived signals with any third party. The signals exist only inside your own tenant’s dashboard and notification channels.
3. What data the engine sees.
Tag-monitoring events captured from your own websites by the tagdrishti.js script. Events use pseudonymous session identifiers with a salt that rotates daily, satisfying the pseudonymisation bar in GDPR Recital 26 (more detail on /gdpr). PII parameters in URLs are stripped at ingestion; we do not see the email addresses, names, or phone numbers of your end-users unless they are encoded in custom event fields you configured. The engine does not see your account billing data, support conversations, or any other category of information classified as Confidential or Restricted on /privacy.
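As a hedged, server-side sketch of the two steps described above (the function names, salt handling, and deny-list are illustrative, not the shipped tagdrishti.js or ingestion internals):

```ts
import { createHash } from "node:crypto";

// Sketch of daily-salt pseudonymisation; names are illustrative.
function pseudonymiseSession(sessionId: string, dailySalt: string): string {
  // The salt rotates every 24 hours, so the same visitor maps to a
  // different pseudonym each day and identifiers cannot be joined
  // across days.
  return createHash("sha256").update(dailySalt + sessionId).digest("hex");
}

// PII query parameters are dropped at ingestion, before storage.
const PII_PARAMS = ["email", "name", "phone"]; // illustrative deny-list
function stripPii(url: URL): URL {
  const clean = new URL(url.toString());
  for (const p of PII_PARAMS) clean.searchParams.delete(p);
  return clean;
}
```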
4. Six principles we hold ourselves to.
Customer data isolation.
Models are tenant-scoped. Your data shapes only your alerts. Cross-tenant aggregation requires explicit, written customer authorisation that we do not solicit.
Explainability.
Every alert produced by the engine carries the raw signal that fired it: event count, threshold value, time window, and the SQL query (or its semantic equivalent) that produced the decision. There are no opaque model outputs. If an alert fires, you can audit it.
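To make that commitment concrete, here is a hypothetical payload of that shape. The field names and query text are illustrative, not a documented schema:

```ts
// Hypothetical example of the auditable signal an alert carries.
const exampleAlert = {
  check: "failure_rate_spike",
  eventCount: 4,   // failures observed in the window
  threshold: 3,    // the configured minimum before firing
  window: { start: "2025-06-01T10:00:00Z", end: "2025-06-01T10:15:00Z" },
  query:
    "SELECT COUNT(*) FROM events " +
    "WHERE tenant_id = @tenant AND tag_id = @tag " +
    "AND status = 'failed' AND ts BETWEEN @start AND @end",
};
```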
Human in the loop for consequential decisions.
Pricing, billing, account suspension, and service availability are deterministic configuration. AI does not gate any of these. Any future feature that would let an AI signal change account state will require explicit human review.
Customer-controllable.
Customers can disable monitoring at any time. New AI-derived features ship with per-feature opt-out before they reach end-users. If you want a specific anomaly check turned off for your tenant, the env-var override path on ALERT_POLICY lets us flip it without a code change; the override is documented internally and propagates within one minute.
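A minimal sketch of how such an override could work. The environment-variable name and format are assumptions, not our documented configuration surface:

```ts
// Sketch only: variable name and comma-separated format are assumptions.
// e.g. ALERT_POLICY_DISABLED="failure_rate_spike,consent_violation"
const disabled = new Set(
  (process.env.ALERT_POLICY_DISABLED ?? "")
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean),
);

function checkEnabled(check: string): boolean {
  return !disabled.has(check); // flipped per deployment without a code change
}
```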
Real evidence, not statistical guesses.
Our alert philosophy: TagDrishti alerts only when a recorded session or event shows a real delivery failure to a destination platform. We do not alert because expected traffic didn’t arrive. Traffic-absence inferences (low volume, missing fires, “site looks down”) are dashboard signals, not paging signals. This philosophy is codified at the top of the alert section in our backend source as the ALERT_POLICY map (sketched in section 1).
Updates announced ahead of time.
This policy is reviewed quarterly. Material changes are emailed to admins of active workspaces at least 30 days before they take effect, mirroring the cadence at /subprocessors. You may object during the window.
5. Third-party AI (if any).
Today, no third-party large-language-model or generative-AI service sits in our customer-facing data path. The anomaly engine is fully in-house. If we adopt a third-party model in future, that model becomes a sub-processor under the same regime as the rest of our stack: listed on /subprocessors, bound by an Article 28 DPA, with the 30-day customer objection window in force before any data flows.
If we add such a model, we will:
- Constrain the inputs and outputs at the API boundary so the model cannot see PII it doesn’t need (a redaction sketch follows this list).
- Verify the supply chain (provider certifications, licence terms, training-data provenance) before adoption.
- Test the deployed model for prompt-injection resilience, output leakage, and behavioural drift on a fixed evaluation set, retaining the test results as audit evidence.
- Monitor for model performance degradation in production with the same telemetry pipeline that watches the rest of the platform.
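A hedged sketch of the first commitment, constraining inputs at the API boundary. Every identifier below is hypothetical, since no such model exists in our stack today:

```ts
// Hypothetical allow-list redaction before any third-party model call.
const ALLOWED_FIELDS = ["tag_id", "destination", "status", "ts"] as const;

function redactForModel(event: Record<string, unknown>): Record<string, unknown> {
  // Allow-list, not deny-list: anything the model does not explicitly
  // need (including PII that slipped into custom fields) is dropped.
  const safe: Record<string, unknown> = {};
  for (const f of ALLOWED_FIELDS) {
    if (f in event) safe[f] = event[f];
  }
  return safe;
}
```

An allow-list fails closed: a new field added upstream stays invisible to the model until someone deliberately adds it to the list.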
6. Your rights.
- Access. The raw signal behind any alert is exportable from the dashboard. Detailed model logs are available on request under your DPA.
- Correction / objection. If you believe an alert was incorrect, reply to the notification and we will retain the raw input for 30 days so the model decision can be re-evaluated.
- Deletion. The DSR erasure endpoint at DELETE /dsr/erase purges all model-input rows for your workspace from BigQuery (a call sketch follows this list). See /privacy §7.
- Opt-out of statistical checks. A per-tenant override is available on request; we will document and apply it within one business day.
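A hedged example of invoking the erasure endpoint. The base URL and auth scheme are placeholders; only the DELETE method and the /dsr/erase path come from this page:

```ts
// Example invocation; base URL and bearer-token auth are placeholders.
async function eraseWorkspaceModelInputs(apiKey: string): Promise<void> {
  const res = await fetch("https://api.tagdrishti.example/dsr/erase", {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Erasure request failed: ${res.status}`);
}
```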
7. Standards we align to.
This policy is consistent with:
- NIST AI Risk Management Framework (AI RMF 1.0) — Govern, Map, Measure, Manage functions.
- EU AI Act (Regulation 2024/1689) — our system is not high-risk under Article 6 today; should that change with a future feature, we will publish the conformity assessment under Article 43.
- ISO/IEC 42001 (AI management systems) — targeted alongside SOC 2 Type 2 in 2027.
- OECD AI Principles — transparency, robustness, accountability.
8. Contact us.
Questions about this policy, model behaviour on a specific alert, or a request to opt a specific check on or off for your tenant: [email protected] with the subject “AI Policy: [your concern]”.
For data-subject rights specifically, the path is unchanged from /privacy §7 and /ccpa.