
Responsible AI: what startups need to know


Last updated: 12 December 2023.

The world is witnessing a Cambrian explosion of AI tools that can do almost anything, from creating award-winning art to writing academic articles and crunching advanced equations.

Mass AI adoption is now a matter of “when” and not “if”. In 2022, around 91.5% of leading companies actively invested in AI, and 75% integrated AI into their business strategies, according to Accenture.

While it’s nigh-impossible to avoid being swept up by AI and its dizzying potential, startups need to take a deep breath and plan their AI-supported strategies carefully.

Here’s what startups need to know about responsible AI. 

AI’s chequered past

To use AI responsibly, it’s first crucial to understand why ethical debates surrounding AI exist in the first place. There are many examples of “AI gone wrong” to list, but here’s a selection of high-profile ones:

  • Driverless vehicle systems have repeatedly failed to recognise pedestrians with darker skin tones.
  • Facial recognition models consistently misidentify black men and women, leading to the false arrest of at least three men in the USA.
  • A 2018 study entitled Gender Shades found evidence of racial and gender bias in commercial facial analysis algorithms sold by major companies, including IBM and Microsoft.
  • Amazon scrapped an AI recruitment tool which proved biased against women.
  • Apple’s credit card came under regulatory investigation after its algorithm reportedly granted men higher credit limits than women in otherwise identical circumstances.
  • IBM scrapped a multi-billion dollar AI project designed to diagnose cancer after it failed to deliver on expected outcomes.
  • OpenAI’s CLIP and GPT models have shown bias against minority groups; while newer models have reduced the risk, OpenAI acknowledges ongoing problems.
  • The Guardian reported that many AI algorithms wrongly censor and suppress the reach of photos featuring women's bodies.

Discussions of AI’s shortcomings quickly get gloomy, partly because the potential for good is so great - it’s disappointing when we don’t fulfil that potential. Of course, AI adoption has many upsides, but it’s something we must strive for rather than take for granted. 

Why AI is fallible

AI models are fallible as the data used to train them is rarely perfect.

For instance, GPT-3 is trained primarily on data retrieved from the internet. It makes sense - the internet is the largest data source on the planet, and most of it is free.

However, the internet remains a relatively new invention, it’s primarily written in English, and only half the world’s population uses it, let alone contributes to it. Not only does it have blind spots, but it also inherits bias and prejudice from contributors.

Treating the internet as a complete record of human knowledge is risky, and models trained on internet data inherit the internet’s blind spots and biases.

AI is error-prone

We often imagine AI to possess superhuman judgment and objectivity, but this isn’t the case. It’s only as fair, just and objective as it’s designed to be. There’s also been a lag in producing the datasets required to train accurate and effective models.

For example, some facial recognition ‘gold sets’ (datasets deemed among the best in their class) are heavily weighted towards white men.

This is partially to blame for poor performance among some facial recognition AIs that repeatedly fail individuals who aren’t white.

Similarly, data used in failed recruitment AIs reflected a time when women and other minority groups were underrepresented in the job roles they were trying to fill.

Researchers and AI ethicists have been keen to point this out, as building public awareness of AI’s potential blind spots is essential if AI is to remain a force for good.

Ultimately, AI is modelled on organic structures, i.e. the human brain. The human brain is far from infallible; thus, neither is AI. 

But why does this all matter for startups anyway?

Well, if startups invest in and use AI that “goes wrong”, they’ll be liable for the consequences, which could be both financial and reputational.

Moreover, startups are more vulnerable than big companies as they tend to lack dedicated AI ethics departments and have to manage due diligence internally alongside many other tasks.

Principles for ethical AI

For startups, building an overarching understanding of what responsible AI looks like is essential.

A widely cited paper by AI ethicists at Oxford University highlights five key principles for responsible AI:

1. Benevolence

AI must do something good, such as preventing the spread of infectious diseases, analysing pollution to improve air quality, automating potentially dangerous tasks to reduce human injury, etc.

In a commercial context, benevolent AI can streamline tasks to reduce time-consuming manual labour or improve upon human decision-making.

2. Non-malevolence

In doing good, AI mustn’t inflict harmful effects or side effects. For example, an AI shouldn’t manipulate financial markets to earn money if the consequence is eroding people’s incomes.

AI designed to replace human decision-making must remain conscious of the cost, i.e. loss of jobs and human productivity. After all, if AI takes everyone’s jobs, people won’t have incomes, and governments won’t be able to raise money through taxes.

3. Explicability

AI should be explicable and auditable, i.e. it should be possible to explain why it came to a decision. No AI should be a ‘black box’, where we can only see inputs and outputs and not what’s going on inside the algorithm.
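To make this concrete, here’s a minimal, hypothetical sketch of what auditing a model’s decisions can look like in practice. It uses scikit-learn’s permutation importance to ask which inputs actually drive a simple classifier’s predictions; the data and feature names are invented for illustration, not a real credit model.

```python
# Hypothetical example: checking which inputs drive a simple model's decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in data for, say, loan applications with four features
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "existing_debt", "years_trading", "postcode_area"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
# A feature like "postcode_area" dominating the decision would be a red flag
# worth investigating as a possible proxy for protected characteristics.
```

This is only one technique, and it assumes you control the model; for third-party tools, the equivalent step is asking the vendor how decisions can be explained and audited.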

4. Justice and fairness

AI should embed the social values we want it to reflect. If we train models purely on historical data, we should expect them to reflect historical values, which often fall short on diversity and inclusion.

5. Autonomy 

The autonomy given to machines should be restricted and, crucially, reversible. AI should be predictable, i.e. it should do what we expect and intend it to do.

Even ardent futurists like Elon Musk worry about what will happen when AI is released from human control to act with total autonomy.

Treat AI investment the same as any other by ensuring tools and their uses align with your company’s culture and governance style. Keep your tech stack under control and avoid building a sprawling, uncontained collection of AI tools.

Responsible AI use in startups

AI is a social movement as well as a business movement, and humanity must work collaboratively to control it. If we allow it to spiral out of control, the stakes could hardly be higher.

Professor David Shrier at Imperial College London sums this up well:

“The costs of failing to responsibly deploy technologies are existential, not only for individual organisations, but for entire countries.”

Startups are set to play a pivotal role in steering AI usage in the direction of ethical and responsible use. Responsible AI usage doesn’t just ward off the negative consequences of “AI gone wrong”, but it’s also a marker of sound business governance. 

1. Consider the cost and the trade-off

If a bank can make 5% more for its shareholders through automation, is it obliged to do so? What if that comes at the cost of thousands of jobs? Are there other consequences?

These questions are very real, and striking a balance between automation and its consequences is exceedingly tricky. Startups have the opportunity to plan for these problems before they reach critical mass.

Consider the implications of automation: job losses, low morale, unsatisfying work, lack of clarity and loss of talent are all known side effects. Just because an AI can replace someone, should it? And can it really do the job better?

2. Use AI as an aid

One way to negotiate the above issue is to harness AI’s additive benefits. Rather than automating tasks to remove human input, use AI to scale up people’s skills. 

ChatGPT is an excellent example of this. Soon, we’ll be able to complete a wide range of labour-intensive tasks using prompts, from writing articles to designing web pages or even building other machine learning models.

The results are impressive, but they become much more powerful when combined with human input. ChatGPT and associated apps will raise the bar and set a new, higher standard for genuine human work.

After all, if everyone can use AI to complete a task, businesses will need to find new ways to differentiate themselves. Responsible AI usage is a valuable component of that process. 
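As a rough illustration of the “AI as an aid” pattern, the sketch below wraps a text-generation step in a mandatory human review gate, so nothing AI-drafted goes out without named sign-off. The generate_draft function is a placeholder, not any real API; swap in whichever model or service you actually use.

```python
# A minimal, hypothetical human-in-the-loop workflow: the AI drafts,
# a named person reviews and approves before anything is used.
from dataclasses import dataclass
from datetime import datetime, timezone


def generate_draft(prompt: str) -> str:
    # Placeholder for a real model or API call.
    return f"[AI draft based on prompt: {prompt}]"


@dataclass
class ReviewedContent:
    prompt: str
    draft: str
    reviewer: str
    approved: bool
    reviewed_at: str


def review_and_publish(prompt: str, reviewer: str) -> ReviewedContent:
    draft = generate_draft(prompt)
    print(draft)
    decision = input(f"{reviewer}, approve this draft? [y/N] ").strip().lower()
    return ReviewedContent(
        prompt=prompt,
        draft=draft,
        reviewer=reviewer,
        approved=decision == "y",
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    record = review_and_publish("Outline a blog post on EMI schemes", "Content lead")
    print("Published" if record.approved else "Sent back for human rework")
```

The point of the pattern is the audit trail: every AI-assisted output has a prompt, a reviewer and a decision attached, which keeps people in the loop rather than out of a job.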

3. Be aware of AI’s fast-changing nature

AI is evolving rapidly, which is risky for the thousands of businesses building workflows that rely wholly on it. Startups shouldn’t throw everything behind a single tool only to have the rug pulled from under them.

For example, in February 2023, OpenAI suddenly added a ChatGPT Plus option for $20 a month, with privileges over the free version. You’ve got to ask: what developments are forthcoming, and what will they change for businesses?

Will OpenAI, Google and others permit any and all businesses to use their products at will? What happens when regulations shake up the AI industry?

Right now, a handful of major players hold the aces, and their dominance makes AI investment precarious.

4. Keep an eye on regulations

In 2021, the European Commission released its proposal for an Artificial Intelligence Act, “a proposal for a Regulation laying down harmonised rules on artificial intelligence.”

On 8 December 2023, European Union lawmakers made headway with the EU AI Act, the world’s first serious move to govern AI.

The EU AI Act aims to regulate AI use in Europe, setting rules for AI tools like ChatGPT. Although the full text won't be out until 2024, startups can prep for compliance. 

The UK Government, on the other hand, doesn’t intend to introduce new legislation. Keen to “establish the UK as an AI superpower”, the Rt Hon Michelle Donelan MP describes a different approach in a white paper foreword:

“We set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed.”

In other words, watch this space!

5. Take AI governance seriously

Startups should consider who’s in charge of their AI tools and how they can implement security measures to govern sensitive data. 

When regulations come into force, AI vendors won’t necessarily comply with them by default, meaning businesses will need to conduct their own audits. To govern and manage AI internally, ask the following questions:

  1. Who is accountable?
  2. Does AI usage align with business strategy and culture?
  3. Do users understand their AI tools?
  4. Can you ensure AI processes are consistent?
  5. Are there audit trails or ways to track performance and benefits?

Screening AI will sit alongside other types of risk assessment. Consider creating a written AI risk management strategy: list the tools you use, their potential security risks and so on.
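As a starting point, here’s a minimal, hypothetical sketch of what such an AI tool register might capture. The field names and the example entry are assumptions for illustration; a spreadsheet or shared document works just as well.

```python
# Hypothetical internal register of AI tools: who owns each one,
# what data it touches, and what risks have been identified.
from dataclasses import dataclass, field


@dataclass
class AIToolRecord:
    name: str                      # e.g. "ChatGPT"
    owner: str                     # who is accountable internally
    purpose: str                   # what the tool is used for
    data_shared: str               # what data leaves the business
    risks: list[str] = field(default_factory=list)
    audit_notes: list[str] = field(default_factory=list)


register = [
    AIToolRecord(
        name="ChatGPT",
        owner="Head of Marketing",
        purpose="Drafting blog outlines for human editing",
        data_shared="No customer or personal data",
        risks=["Factual errors in drafts", "Vendor terms may change"],
        audit_notes=["Reviewed against internal usage policy, Q4 2023"],
    ),
]

for record in register:
    print(f"{record.name} (owner: {record.owner}): {len(record.risks)} known risks")
```

However it’s stored, the register answers the five questions above at a glance: who is accountable, what each tool is for, and what needs auditing when rules change.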

And if you haven't already, start drafting an internal AI usage policy and facilitating training for your team.

The future of AI

Transparency, accountability and education will take centre stage as AI innovation forges on. Governments and regulators are scrambling to control and harness AI; startups need to stay ahead of these developments.

While big businesses have the clout and money to invest in AI ethics departments and dedicated functions for AI responsibility, startups can’t always afford that luxury.

Planning a careful, considerate and transparent approach to AI investment and usage will win the day. And don’t forget to keep tabs on the latest developments in AI laws and regulations!

We're on a mission to help startup founders and their teams succeed. That's why we've created a suite of equity management tools. Discover what Vestd can do.
