AI Transparency & Disclosure
Heurista is an AI-augmented research platform. We believe researchers deserve full transparency about how AI is used in their work. This document explains every AI-powered feature, how your data is handled, and the safeguards we maintain.
Effective April 14, 2026
Our Commitment to AI Transparency
Research integrity depends on knowing exactly what tools shaped your analysis. When AI is involved in any part of the research process, you have a right to understand what it does, what it does not do, and where human judgment remains essential.
This disclosure exists because we believe AI transparency is not optional in research contexts. It is a prerequisite for trust, reproducibility, and ethical practice. We publish this document voluntarily and update it whenever our AI capabilities change.
We are committed to compliance with the EU AI Act's transparency obligations, GDPR requirements for automated processing, and emerging standards for responsible AI in research. Where regulations are still developing, we default to the most transparent option available.
How We Use AI
The following is a complete list of features in Heurista that use artificial intelligence. Each description explains what the AI does, what it does not do, and how human oversight is maintained.
Qualitative Coding Suggestions
AI analyzes response text alongside your existing codebook and suggests thematic codes that may apply. Each suggestion includes the relevant text passage and a confidence indicator.
Human oversight: AI does not auto-apply codes. Every suggestion is queued for your review. You accept, modify, or reject each one individually. Your codebook remains entirely under your control.
AI Conversational Interviews
During survey interviews, AI generates contextual follow-up questions to probe deeper into participant responses. It follows your interview guide, respects your defined rules and boundaries, and adapts its tone to the conversation.
Human oversight: You define the interview guide, rules, and constraints before any conversation begins. All AI-generated questions and participant responses are stored and fully attributable in your data.
AI-Generated Reports & Findings
Heurista uses a multi-pass AI synthesis process to generate findings reports from your data. Reports include identified themes, supporting evidence with citations to source responses, and actionable recommendations.
Human oversight: All findings must be reviewed by the researcher before they can be exported or published. AI findings are clearly labeled as AI-generated throughout the interface. You can edit, remove, or add to any finding.
Natural Language Data Querying (Copilot)
Ask questions about your data in plain language. The copilot interprets your question, queries your dataset, and returns structured answers with references to the specific responses that informed the answer.
Human oversight: Every answer includes source references so you can verify the AI's interpretation against your primary data. The copilot does not modify your data.
Hue AI Assistant
Hue is an in-app AI assistant that helps you navigate the platform, understand your data patterns, and get methodological guidance. It can answer questions about Heurista's features, suggest analytical approaches, and help interpret results.
Human oversight: Hue provides guidance and suggestions only. It cannot modify your data, apply codes, or change your analysis without your explicit action.
Batch Coding
AI processes multiple responses simultaneously to generate code suggestions at scale. This feature uses a faster AI model optimized for high-volume processing.
Human oversight: All batch-generated suggestions are queued for your review, identical to individual coding suggestions. No codes are applied until you approve them.
Document Analysis
AI extracts and analyzes text from uploaded documents including PDFs, transcripts, and other text-based files. Extracted text is stored within your project for analysis alongside survey data.
Human oversight: You review extracted text for accuracy before it enters your analysis. The original documents are preserved unchanged.
Text-to-Speech
Neural voice synthesis reads survey questions aloud in 91 languages, making surveys accessible to participants with varying literacy levels. This uses the Microsoft Edge Text-to-Speech service.
How it works: Question text is sent to the TTS service, which returns synthesized audio that plays in the participant's browser. Audio is not stored server-side. This feature does not analyze or interpret your data.
AI Models We Use
Heurista's AI features are powered by large language models accessed through commercial APIs. By default, the platform uses Anthropic's Claude models. Organizations may configure alternative AI providers based on their data governance requirements. We do not train custom models or fine-tune existing ones on your data. We maintain strict data processing agreements with our AI providers governing how your data is handled.
Advanced AI Model
Used for complex analytical tasks: findings synthesis, report generation, nuanced qualitative analysis, and conversational interviews. This model excels at understanding context, following detailed instructions, and producing structured analytical outputs.
Standard AI Model
Used for high-volume tasks: batch coding suggestions, quick chat interactions, and routine classification. This model is optimized for speed and efficiency while maintaining quality appropriate for suggestion-based workflows.
How Your Data Interacts with AI
What data is sent to AI services
When you use an AI feature, Heurista sends only the minimum context required for that specific task. This typically includes survey question text, response text relevant to the analysis, your codebook or interview guide, and any instructions you provide. Each request is scoped to the immediate task — we do not send your entire dataset when only a subset is needed.
What is never sent
Your account credentials, billing and payment information, and other users' data are never included in AI processing requests. Personal identifiers are not required for AI analysis and are excluded from requests wherever technically feasible.
Personal identifiers in response data
If research participants include personal identifiers in open-text responses, that text may be sent to our AI provider as part of normal AI processing. We recommend anonymizing or pseudonymizing response data before using AI features on datasets containing sensitive personal information.
Data retention and processing
Under Anthropic's commercial API terms, inputs and outputs are not used for model training. Anthropic may retain data for limited periods for safety monitoring, abuse detection, or compliance with legal obligations. For current retention details, refer to Anthropic's API data retention policy.
No model training on your data
Your research data is never used to train, fine-tune, or improve AI models. This commitment is reflected in our data processing agreements with our AI provider. Your data exists to serve your research, not to improve commercial AI systems.
Data minimization
We follow a principle of data minimization for all AI interactions. Only the minimum context needed for each specific task is included in requests. For example, a coding suggestion for a single response sends only that response and relevant codebook entries — not your entire dataset.
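As an illustration of this principle, the sketch below shows how a minimal coding-suggestion request might be assembled. The function name and payload fields are hypothetical and do not reflect Heurista's actual implementation.

```python
# Hypothetical sketch of the data-minimization principle: a coding-suggestion
# request carries one response and the codebook, never the full dataset.

def build_coding_request(response_text, codebook, instructions=""):
    """Assemble the minimal context for a single coding suggestion."""
    return {
        "task": "coding_suggestion",
        "response": response_text,     # only the one response under review
        "codebook": [                  # code names and definitions only
            {"code": code, "definition": definition}
            for code, definition in codebook.items()
        ],
        "instructions": instructions,  # researcher-provided guidance, if any
    }

request = build_coding_request(
    "The clinic was too far away to reach on foot.",
    {"access_distance": "Barriers related to physical distance"},
)
```

Note that nothing outside the immediate task (no other responses, no account metadata) appears in the payload.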
Cross-border data transfers
When research data is processed by our AI provider, it may be transferred to servers in the United States. This transfer is governed by Standard Contractual Clauses and our Data Processing Agreement. For full details, see our Privacy Policy.
Configurable AI provider
Heurista's architecture supports multiple AI providers. Organizations can configure which AI service processes their data based on their own data governance policies. The data minimization principles described above apply regardless of which provider is configured.
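A minimal sketch of what such provider configuration could look like, assuming a simple registry keyed by organization settings. All names and values here are illustrative, not Heurista's actual configuration schema.

```python
# Illustrative provider registry (hypothetical names and values).
SUPPORTED_PROVIDERS = {
    "anthropic": {"hosting": "vendor API", "region": "us"},
    "bedrock": {"hosting": "customer cloud (AWS)", "region": "configurable"},
    "vertex": {"hosting": "customer cloud (GCP)", "region": "configurable"},
}

def resolve_provider(org_config):
    """Return provider settings for an organization, defaulting to Anthropic."""
    name = org_config.get("ai_provider", "anthropic")
    if name not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported AI provider: {name}")
    return SUPPORTED_PROVIDERS[name]
```

Whichever provider is selected, the same data-minimization principles apply to every request.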
Enterprise: Private Cloud AI
For organizations that require full data sovereignty, Heurista offers private cloud deployment where AI processing runs entirely within your own cloud infrastructure. With this configuration, no research data leaves your environment.
Your infrastructure, your data
AI models run within your own AWS (via Amazon Bedrock) or Google Cloud (via Vertex AI) account. Research data is processed in your chosen region and never transmitted to a third-party API.
Same quality, full control
Private deployments use the same AI models and analytical pipelines as the standard platform. There is no difference in analysis quality — only in where the processing occurs.
Regional compliance
Choose the cloud region that meets your regulatory requirements — EU, US, or other supported regions. Combined with Heurista's EU-hosted database, this enables fully region-contained data processing.
Private cloud deployment is available as part of Heurista's Enterprise plan. Contact enterprise@heurista.app to discuss your requirements.
Human Oversight Architecture
Heurista is built on a human-in-the-loop architecture. AI produces suggestions; humans make decisions. This is not a policy choice layered on top of automation — it is how the system is designed at every level.
All AI outputs are suggestions, not decisions
No AI output in Heurista is automatically applied to your research. Every code, finding, and recommendation enters a review queue where you make the final call.
Researcher approval required
Before any AI-generated output becomes part of your published findings, you must explicitly review and approve it. This applies to coding suggestions, findings reports, and all analytical outputs.
Full audit trail
Heurista maintains a coding audit trail that distinguishes between AI-suggested codes and human-applied codes. You can see exactly which elements of your analysis involved AI assistance at any time.
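As a sketch of what one entry in such an audit trail might record (all field names are hypothetical, not Heurista's actual schema):

```python
# Hypothetical audit-trail record distinguishing AI-suggested from
# human-applied codes; every AI suggestion starts as "pending" until
# a researcher accepts, modifies, or rejects it.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class CodingAuditEntry:
    response_id: str
    code: str
    origin: str                      # "ai_suggested" or "human_applied"
    decision: str = "pending"        # "accepted", "modified", "rejected", "pending"
    reviewer: Optional[str] = None   # set once a human reviews the entry
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = CodingAuditEntry(response_id="r-102", code="access_distance",
                         origin="ai_suggested")
# Researcher review converts the suggestion into an attributable decision.
entry.decision, entry.reviewer = "accepted", "researcher@example.org"
```

A record of this shape is what allows any element of the final analysis to be traced back to either an accepted AI suggestion or a direct human action.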
Confidence indicators
Where applicable, AI outputs include confidence scores so you can prioritize your review. Lower-confidence suggestions warrant closer scrutiny.
Override and reject capability
You can override, modify, or reject any AI suggestion at any point. The system is designed to make correction easy, not to push you toward accepting AI outputs.
Limitations & Known Risks
AI is a powerful tool, but it has real limitations that researchers must understand. We document these openly so you can make informed decisions about how and when to use AI assistance.
AI may produce outputs that are inaccurate, incomplete, or misleading. Always verify AI-generated findings against your primary data.
AI may not capture cultural nuances, local context, or domain-specific meaning that a human researcher would recognize. This is especially important in cross-cultural research.
AI outputs may reflect biases present in the model's training data. Be alert to patterns that seem to favor certain perspectives or overlook others.
AI cannot replace domain expertise, methodological judgment, or the interpretive skill that comes from deep engagement with your data.
AI performance may vary across languages, particularly for less-resourced languages. If you are working in a language other than English, exercise additional scrutiny on AI outputs.
Text-to-speech quality varies by language. While 90+ languages are supported, naturalness and pronunciation accuracy differ across language models.
Known Limitations by Context
Language Performance: AI analysis quality varies significantly by language. Performance is strongest in English, with demonstrated capability in major European languages (French, German, Spanish). Performance may be less reliable for less-resourced languages, including many languages spoken in humanitarian contexts. Always validate AI outputs against source data when working in non-English languages.
Cultural Context: The underlying AI models were primarily trained on English-language internet text. Analysis of responses from cultural contexts underrepresented in that training data — including many communities in the Global South — may reflect systematic gaps or biases. Cross-cultural research should apply additional human verification.
Demographic Patterns: AI models may produce outputs that reflect societal biases present in training data, including patterns related to race, ethnicity, gender, religion, and socioeconomic status. When analyzing data about marginalized or vulnerable communities, apply heightened scrutiny to AI-generated themes and classifications.
What does not use AI
Not everything in Heurista involves AI. Quantitative analysis features — charts, frequency distributions, cross-tabulations, statistical summaries — use deterministic algorithms, not AI models. Survey distribution, response collection, data storage, and export formatting are also entirely non-AI processes. When a feature uses AI, it is clearly indicated in the interface.
High-Stakes and Humanitarian Use Cases
If you intend to use AI-generated outputs to inform decisions that may materially affect the welfare, rights, safety, or access to services of individuals or populations — including humanitarian programming, protection assessments, resource allocation, or policy recommendations — you must independently verify all AI-generated findings through non-AI methods before relying on them. AI outputs from Heurista are decision-support tools and must not serve as the primary basis for such decisions.
Your Responsibilities When Using AI Features
AI-augmented research requires shared responsibility between the platform and the researcher. Heurista provides the tools and safeguards; you provide the judgment and accountability.
Verify against primary data
Always check AI-generated findings, codes, and summaries against your original response data. Use the source citations and audit trail to trace every claim back to evidence.
Apply professional judgment
Treat AI suggestions as one input among many. Your expertise, familiarity with the research context, and methodological training should guide your final decisions.
Disclose AI assistance
When publishing research that used Heurista's AI features, disclose AI assistance as required by your journals, funders, institutions, or ethical review boards. Heurista's audit trail can support this disclosure.
Do not misrepresent AI-generated work
AI-generated analysis should not be presented as solely human work. Transparent reporting of methods — including AI-assisted methods — is a cornerstone of research integrity.
Report concerns
If you encounter AI outputs that seem harmful, biased, inappropriate, or factually wrong, report them to us. Your feedback directly improves how we deploy AI in research contexts.
Compliance & Standards
EU AI Act
We are committed to meeting the transparency obligations set out in the EU AI Act, including clear disclosure of AI-generated content, documentation of AI system capabilities and limitations, and user notification when interacting with AI systems. For general research and academic use, Heurista functions as a decision-support tool requiring human oversight. However, users who apply AI-generated outputs to decisions that determine access to essential services, humanitarian aid allocation, protection assessments, or resource distribution may be deploying the system in a context that triggers high-risk classification under the EU AI Act. In such cases, the deploying organization bears responsibility for conducting a conformity assessment and implementing additional safeguards required by the Act. Contact us at legal@heurista.com for guidance on high-risk use contexts.
GDPR
AI data processing is conducted under lawful bases established by GDPR. We maintain data processing agreements with our AI provider that ensure your data is processed solely for the purpose of delivering the requested feature — not for any secondary use.
AI Provider Usage Policies
Our use of AI models complies with our provider's usage policies, which include prohibitions on generating harmful content, requirements for human oversight in high-stakes applications, and restrictions on using outputs to train competing models.
GDPR Article 22 — Automated Decision-Making
Heurista does not make automated decisions with legal or similarly significant effects on individuals. All AI outputs are presented as suggestions requiring human review before application. If you believe an AI-generated output has been applied to a decision affecting you without appropriate human review, contact privacy@heurista.com.
Data Protection Impact Assessments
We have conducted Data Protection Impact Assessments for our AI data processing activities. A summary is available upon request by contacting privacy@heurista.com.
Ongoing Monitoring
AI regulation is evolving. We actively monitor regulatory developments across jurisdictions and update our practices as new requirements emerge. This disclosure document is a living document that reflects our current practices and commitments.
Feedback & Concerns
We take AI-related concerns seriously. If you have questions about how AI is used in Heurista, encounter an AI output that concerns you, or want to provide feedback on our AI practices, please reach out.
AI Feedback: ai-feedback@heurista.com
General Inquiries: info@heurista.app
We respond to all AI-related inquiries within five business days. Reports of harmful or biased AI outputs are prioritized for immediate review.
Changes to This Disclosure
This disclosure is updated whenever we add, remove, or significantly modify AI-powered features. We also update it in response to changes in AI regulations, our AI provider's policies, or our own practices.
When material changes are made, we will notify active users via email and display a notice within the application. Minor clarifications or formatting changes may be made without notification.
The effective date at the top of this page always reflects when the most recent substantive update was published. Previous versions are available upon request.
Version 1.1 — April 14, 2026. Added configurable AI provider language and the enterprise private cloud deployment section. Previous version: 1.0 (March 16, 2026).