    Using AI Responsibly: What Startups and Scaleups Should Include in a Company AI Policy

    For startups and scaleups, having a clear company AI policy is becoming essential as teams adopt generative AI tools across day-to-day operations. A policy is a written document, but it is also an agreed set of principles about how a team behaves. Given how prevalent AI tools now are, it is important to agree those principles early so that they apply uniformly across the business.

    We’ve seen that in many startups and scaleups, AI use builds gradually rather than through a single decision. Tools already in use introduce AI features as standard. Team members use chatbots to draft responses or summarise information. Developers rely on coding assistants. Sales teams automate call notes. Before long, AI is embedded across day-to-day operations.

    Used well, this can be a real competitive advantage. Used without structure, it can introduce legal, data and commercial risk. As regulation and customer expectations evolve, AI is no longer treated as experimental. It is increasingly viewed as ‘business as usual’, with the same standards around privacy, security and accountability that apply elsewhere in your company.

    That is where a clear AI policy becomes important: it gives teams the confidence to use AI responsibly as the business grows, in a way that aligns with the company’s culture and overall business ethos.

    So what should founders and leadership teams actually include in their company AI policy?

    Start with the purpose of your company AI policy

    A good AI policy should open by making two things clear:

    1. that the company actively supports the responsible use of AI to improve productivity, quality and customer outcomes.

    2. that the policy applies to anyone using AI for company work. This usually includes employees, contractors and advisors, and covers both internal uses and customer-facing features, as well as AI embedded in third-party tools.

    This approach sets expectations early, so that your teams have confidence to use AI while making clear that there are boundaries in place.

    Be explicit about ownership and accountability

    One of the most common issues we see in growing companies is that AI is used everywhere, but owned by no one.

    Your policy should clearly set out:

    • who owns and maintains the AI policy,

    • who approves higher-risk uses, for example where AI touches customer data (especially sensitive PII) or decision-making, and

    • how issues are escalated if something goes wrong.

    This does not need to be complex. Even a simple approval and escalation path can remove a lot of ambiguity as teams scale.

    Define acceptable use in your AI policy

    Policies tend to fail when people cannot quickly tell what is allowed and what is not.

    A common question we hear from founders is whether employees can use tools like ChatGPT at work, and if so, under what conditions.

    A practical approach is to group AI use into three categories.

    1. Allowed uses might include brainstorming, summarising internal notes, translating text, or drafting non-sensitive marketing copy.

    2. Uses that require approval often include anything customer-facing, anything involving personal data, or anything that could materially affect users or the business, such as automated decision-making, pricing logic or safeguarding workflows.

    3. Prohibited uses typically include inputting confidential client information, credentials, unreleased product details, private source code or third-party datasets you do not have rights to use into non-approved AI tools.

    Clarity here enables faster, safer adoption.

    Data protection and personal data in AI policies

    If your team is using AI day to day, your policy needs to explain how data protection applies in real situations.

    This should cover:

    • what counts as personal data and sensitive data in your business,

    • when personal data can be used with AI tools and when it cannot, and

    • what safeguards are required when it is permitted.

    For many companies, a sensible default is that personal data should not be entered into AI tools unless the tool is approved and the use case has been reviewed. Supporting principles around data minimisation, redaction and retention help teams make good decisions without constant oversight.

    Treat prompts and outputs as confidential by default

    AI prompts and outputs may be logged, stored or exposed in ways that are not always obvious.

    Your policy should make clear that:

    • prompts and outputs may be treated as business records,

    • confidential information, credentials and internal architecture should not be entered into non-approved tools, and

    • suspected data exposure should be reported promptly, even if it appears minor.

    This aligns AI use with existing security and confidentiality expectations.

    Clarify IP and ownership expectations

    AI can accelerate output, but it can also blur ownership if expectations are not set.

    Your policy should confirm:

    • that AI-assisted work created in the course of someone’s role belongs to the company,

    • that AI output must be reviewed before being relied on or published, and

    • that teams should avoid copying styles, brands or content that could create IP or copyright risk.

    A helpful principle is to treat AI output as a starting point rather than the final answer.

    Approving AI tools and vendors in your AI policy

    Many AI risks come not from what a company builds, but from the tools it adopts.

    Your policy should set expectations for approving AI-enabled tools, including:

    • what data the tool processes and where it is hosted,

    • whether inputs are used to train models,

    • what security and contractual protections apply, and

    • whether features can be configured or disabled.

    This is increasingly important as AI features are embedded into everyday platforms by default.

    Plan for where regulation is heading

    For startups and scaleups selling into the EU, AI regulation is no longer a future issue: the EU AI Act is now in force, with obligations phasing in over the coming years. Even if you are not building AI models yourself, you may still have obligations as a deployer of AI systems. Your policy should acknowledge this and provide a framework that can evolve as regulatory requirements mature.

    Building this in early is far easier than retrofitting later.

    Keep it live with training and review

    Finally, an AI policy should not sit unused. It should be supported by short, role-appropriate training, reviewed regularly, and updated when the business launches new AI-driven features or enters new markets. Aim for consistency, awareness and responsible scale.

    Choosing the right approach for your business

    There is no single correct AI policy. The right approach depends on your product, your data, your customers and your growth plans; it is specific to how you plan to operate.

    As such, what matters most is not adopting a generic template, but creating a policy that reflects how your company actually uses AI today and where you expect to be in the next few years.

    Read more on AI Data Ownership (The legal blind spots stalling data and AI start-up growth – and what to do about it) here!

    Accelerate Law provides strategic and legal advice to startups end-to-end through angel investment rounds and VC funding rounds, including support with SEIS and EIS matters, flexible funding (for example through Advanced Subscription Agreements), and drafting and negotiating investment terms from term sheet through to completion. Once funding is secured, Accelerate Law specialises in working with scaleups as fractional in-house lawyers, covering commercial contracts, IP, employment support, EMI schemes and more. Contact us here or reach out to simon@acceleratelaw.co.uk to find out more.