Privacy & Governance Summary
Responsible AI in Sales Training
NovaSQ provides sales transformation training that leverages Generative AI to improve research, preparation, and messaging quality.
We recognise that for enterprise clients, the primary risks associated with AI are data leakage and low-quality/hallucinated outputs.
Our "Relevant by Design" framework is built to be safe-by-default. We do not process client data, nor do we require access to internal systems.
1. Provider details
Provider: NovaSQ Pty Ltd (NovaSQ)
ABN: 34 693 165 114
Address: 470 St Kilda Road, Melbourne VIC 3004, Australia
Email: info@novasq.com.au
2. The "External Processor" Mandate
We teach a non-negotiable operating standard: all AI tools are treated as external processors.
• The Safe-to-Email Test: Participants are instructed never to input any information into an AI tool that would not be safe to email to an external third party
• Zero Trust Policy: We assume all public LLMs may use data for training unless specific Enterprise/API configurations are in place; therefore, no proprietary data is permitted in any prompt
3. Data Handling & Workshop Architecture
• No Access to CRM/Internal Data: The workshop does not require CRM exports, pipeline data, or access to internal knowledge bases
• Participant Sovereignty: All "Engagement Assets" and outputs are created on the participants' own devices. NovaSQ does not copy, store, or host any work products generated during the session
• Anonymisation: Workshop exercises use high-level industry contexts or anonymised "placeholder" data to demonstrate workflows without risking real-world identifiers
4. Prohibited Data (The "Never" List)
Our training includes a strict prohibition on the following inputs:
• Real customer or prospect contact details
• Pricing schedules, internal margins, or deal values
• Legal contracts, SOWs, or proprietary proposals
• Passwords, API keys, or internal system URLs
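The prohibitions above also lend themselves to a lightweight automated pre-submission check. The sketch below is a hypothetical illustration only (the patterns and function names are assumptions, not part of NovaSQ's framework) of screening a draft prompt for obvious "Never List" material before it reaches an AI tool:

```python
import re

# Hypothetical detection patterns for obvious "Never List" material.
# A real deployment would tune these to the organisation's own data formats.
NEVER_LIST_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "API key / secret": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.I),
    "internal URL": re.compile(r"https?://[\w.-]*(?:intranet|internal|corp)[\w./-]*", re.I),
    "dollar value": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the 'Never List' categories detected in a draft prompt."""
    return [label for label, pattern in NEVER_LIST_PATTERNS.items()
            if pattern.search(text)]

# A draft containing a real contact and a deal value is flagged:
findings = screen_prompt(
    "Draft: offer Acme a discount, deal value $120,000, contact jane@acme.com"
)
```

A check like this is a backstop, not a substitute for the Safe-to-Email Test: pattern matching misses context (e.g., a named but unannounced deal), so human judgement remains the primary control.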
5. Technical Guardrails
We provide participants with specific technical instructions to maintain privacy:
• Temporary Chat Mode: We encourage the use of "Temporary" or "Incognito" modes to prevent prompts from persisting in chat history
• Memory Management: We instruct users on how to disable "Memory" features that might inadvertently store sensitive workflow context
• Tool-Specific Guidance: We align our training with your organisation’s approved AI stack (e.g., ChatGPT Enterprise, Microsoft Copilot, Claude for Work)
6. Output Integrity & Accountability
To mitigate the risk of "hallucinations" or generic AI content:
• Human-in-the-Loop: We enforce a "Last Line of Defence" rule. No AI-generated content is sent to a customer without human verification, fact-checking, and "Thinking Quality" review
• Attribution & Truth: Participants are taught to verify AI-suggested claims against primary sources (Company Annual Reports, official websites, etc.) before use