
Ethical AI: Preparing Your Organisation for the Future of AI

Rosemary J Thomas, Senior Technical Researcher, AI Labs, Version 1

Artificial intelligence is changing the world, generating countless new opportunities for organisations and individuals. At the same time, it poses several known ethical and safety risks, such as bias, discrimination, and privacy violations, as well as the potential to negatively impact society, well-being, and nature. It is therefore essential to approach this groundbreaking technology with an ethical mindset, adapting practices to make sure it is used in a responsible, trustworthy, and beneficial way.

To achieve this, first we need to understand what an ethical AI mindset is, why it needs to be central, and how we can establish ethical principles and direct behavioural changes across an organisation. We must then develop a plan to steer ethical AI from within and be prepared to take liability for the outcomes of any AI system.

What is an ethical AI mindset?

An ethical AI mindset is one that acknowledges the technology’s influence on people, society, and the world, and understands its potential consequences. It is based on the recognition that AI is a dominant force that can shape the future of humankind. An ethical AI mindset ensures AI is aligned with human principles and goals, and that it is used to support the common good and the ethical development of all.

It is not only about preventing or moderating the adverse effects of AI, but also about harnessing its immense capability and potential. This includes developing and employing AI systems that are ethical, safe, fair, transparent, responsible, and inclusive, and that respect human values, autonomy, and diversity. It also means ensuring that AI is accessible, affordable, and useful for everyone – especially the most vulnerable and marginalised groups in our society.

Why you need an ethical AI mindset

Operating with an ethical AI mindset is essential[1], not only because it is the right thing to do, but also because it is expected: research shows customers are far less likely to buy from unethical establishments. As AI evolves, the expectation for businesses to use it responsibly will continue to grow.

Adopting an ethical AI mindset can also help in adhering to current, and continuously developing, regulation and guidelines. Governing bodies around the world are establishing numerous frameworks and standards to make sure AI is used in an ethical and safe way and, by creating an ethical AI mindset, we can ensure AI systems meet these requirements, and prevent any prospective fines, penalties, or court cases.

Additionally, the right mindset will promote the development of AI systems that are more helpful, competent, and pioneering. By studying the ethical and social dimensions of AI, we can design systems that are more aligned with the needs, choices, and principles of our customers and stakeholders, and that provide ethical solutions and enhanced user experiences.

Ethical AI as the business differentiator

Fostering an ethical AI mindset is not a matter of singular choice or accountability, it is a united, organisational undertaking. To integrate an ethical culture and steer behavioural changes across the business, we need to take a universal and methodical approach.

It is important that the entire workforce, including executives and leadership, is educated on the need for AI ethics and its use as a business differentiator[2]. To achieve this, consider taking a mixed approach to increase awareness across the company, using mediums such as webinars, newsletters, podcasts, blogs, or social media. For example, your company website can be used to share significant examples, case studies, best practices, and lessons learned from around the globe where ethical AI practices have been implemented effectively. In addition, guest sessions with researchers or consultants, or collaborations with academic research institutions, can help to communicate insights and guidance on AI ethics and showcase it as a business differentiator.

It is also essential to take responsibility for the consequences of any AI system that is developed for practical applications, regardless of where an organisation or product sits in the value chain. This will help build credibility and transparency with stakeholders, customers, and the public.

Evaluating ethics in AI

We cannot monitor or manage what we cannot review, which is why we must establish a method of evaluating ethics in AI. A number of tools and systems can be used to steer ethical AI, including ethical AI frameworks, authority structures, and the Ethics Canvas.

An ethical AI framework is a set of values and principles that acts as a handbook for your organisation’s use of AI. It can be adopted, adapted, or built to suit your organisation’s own goals and values, with stakeholders involved in its creation. Examples include the UK Government’s guidance on AI ethics and safety[3] and the Information Commissioner’s Office’s AI and data protection risk toolkit[4], which covers ethical risks across the lifecycle stages of an AI system – from business requirements and design through to deployment and monitoring.
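
To make such a framework reviewable in practice, it helps to express it as structured data rather than a static document. The sketch below is a minimal illustration of this idea in Python; the principles, lifecycle stages, and review questions are invented for the example, not taken from the UK Government or ICO materials cited above.

```python
# Minimal sketch: an ethical AI framework encoded as a reviewable checklist.
# The stages, principles, and questions are illustrative placeholders.
from dataclasses import dataclass

LIFECYCLE_STAGES = [
    "business_requirements", "design", "data_collection",
    "model_training", "deployment", "monitoring",
]

@dataclass
class FrameworkCheck:
    principle: str               # e.g. "fairness", "transparency"
    stage: str                   # one of LIFECYCLE_STAGES
    question: str                # the review question posed to the team
    passed: bool | None = None   # None until the review has been carried out

def outstanding(checks: list[FrameworkCheck]) -> list[FrameworkCheck]:
    """Return checks that have not yet been reviewed or that failed."""
    return [c for c in checks if c.passed is not True]

checks = [
    FrameworkCheck("fairness", "data_collection",
                   "Has the training data been assessed for sampling bias?"),
    FrameworkCheck("transparency", "deployment",
                   "Can end users be told how decisions are made?"),
]
print(len(outstanding(checks)))  # 2, until the reviews are recorded
```

Keeping the checklist in code, or in a shared configuration file, means outstanding reviews can be surfaced automatically at each lifecycle gate.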

An ethical AI authority structure is a set of roles, obligations, and processes that ensures your ethical AI framework is followed and reviewed. You can establish an ethical AI authority structure that covers multiple aspects and levels of your organisation and assigns clear responsibilities to each stakeholder.

The Ethics Canvas can be used in AI engagements to help build AI systems with ethics integrated into development. It helps teams identify potential ethical issues that could arise from the use of AI and develop guidelines to avoid them. It also promotes transparency by providing clear explanations of how the technology works and how decisions are made, and it can increase stakeholder engagement by gathering input and feedback on the ethical aspects of the AI project. The canvas helps to structure risk assessment and can serve as a communication tool to convey the organisation’s commitment to ethical AI practices.
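
Because the canvas doubles as a risk register and a communication tool, capturing it as structured data makes it easy to render in different formats for different audiences. Below is a small, hypothetical sketch; the section names and entries are invented, so adapt them to whichever canvas template your team actually uses.

```python
# Hypothetical Ethics Canvas captured as structured data; section names
# and entries are invented examples, not a prescribed template.
ethics_canvas = {
    "affected_individuals": ["loan applicants", "customer-service agents"],
    "affected_groups": ["applicants from under-represented postcodes"],
    "potential_harms": ["biased credit decisions", "opaque rejections"],
    "mitigations": ["bias audit before release", "plain-language explanations"],
}

def render_canvas(canvas: dict[str, list[str]]) -> str:
    """Render the canvas as a plain-text summary for stakeholders."""
    lines = []
    for section, items in canvas.items():
        lines.append(section.replace("_", " ").title() + ":")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)

print(render_canvas(ethics_canvas))
```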

Ethical AI implications

Any innovation process, whether it involves AI or not, can be marred by a fear of failure and the desire to succeed at the first attempt. But failures should be regarded as lessons and used to improve ethical experiences in AI.

To ensure AI is being used responsibly, we need to identify what ethics means in the context of our business operations. Once this has been established, we can personalise our message to the target stakeholders, staying within our own definition of ethics and including the use of AI within our organisation’s wider purpose, mission, and vision.

In doing so, we can draw more attention towards the need for responsible use policies and an ethical approach to AI, which will be increasingly important as the capabilities of AI evolve, and its prevalence within businesses continues to grow.


[1] https://www.mckinsey.com/featured-insights/in-the-balance/from-principles-to-practice-putting-ai-ethics-into-action

[2] https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1258721/full

[3] https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety

[4] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/



Building Compliance into Business Culture is Essential in Fintech

Source: Finance Derivative

Tetyana Golovata, Head of Regulatory Compliance at IFX Payments

Regulation plays a critical role in shaping the fintech landscape. From Consumer Duty and FCA annual risk reporting to APP fraud, the tectonic plates of the sector are shifting, and whether you consider these regulations to benefit or hinder the industry, businesses are struggling to keep up.

According to research by fraud prevention fintech Alloy, 93% of respondents said they found it challenging to meet compliance requirements, while in a new study by Davies a third of financial leaders (36%) said their firms had been penalised for compliance breaches in the year to June. With the FCA bringing in its operational resilience rules next March, it is more important than ever to ensure your company makes the grade on compliance. 

Lessons from history

Traditionally, FX has struggled with the challenge of reporting in an ever-developing sector. As regulatory bodies catch up and raise the bar on compliance, responsible providers must help the industry navigate the changes and upcoming deadlines.

Fintechs and payments companies are entering uncharted waters – facing pressure to beat rivals by offering more innovative products. When regulators have struggled to keep up in the past, gaps in legislation have allowed some opportunists to slip through the net, as seen in the collapse of FTX. Because of this, implementation and standardisation of the rules is necessary to ensure that innovation remains seen as a force for good, and to help identify and stamp out illegal activity.

Culture vs business

Culture has become a prominent factor in regulatory news, with cases of large fines and public censure relating to cultural issues. As the FCA’s COO, Emily Shepperd, shrewdly observed in a speech to the finance industry, “Culture is what you do when no one is looking”.

Top-level commitment is crucial when it comes to organisational culture. Conduct and culture are closely intertwined, and culture is not merely a tick-box exercise. It is not defined by perks like snack bars or Friday pizzas; rather, it should be demonstrated in every aspect of the organisation, including processes, people, counterparties, and third parties.

In recent years, regulatory focus has shifted from ethics to culture, recognising its crucial role in building market reputation, ensuring compliance with rules and regulations, boosting client confidence, and retaining employees. The evolving regulatory landscape has significantly impacted e-money and payments firms, with regulations strengthening each year. Each regulation carries elements of culture, as seen in:

  • Consumer duty: How do we treat our customers?
  • Operational resilience: How can we recover and prevent disruptions to our customers?
  • APP fraud: How do we protect our customers?

Key drivers of culture include implementing policies on remuneration, conflicts of interest, and whistleblowing, but for it to become embedded it must touch employees at every level.

This is showcased by senior stakeholders and heads of departments facilitating close relationships with colleagues across a company’s Sales, Operations, Tech and Product teams to build a collaborative environment. 

Finance firms must recognise the trust bestowed on them by their customers and ensure the protection of their investments and data is paramount. Consumer Duty may have been a wake-up call for some companies, but progressive regulation must always be embraced and its requirements seen as a baseline rather than a hurdle.

Similarly, the strengthening of operational resilience rules and the upcoming APP fraud regulation in October are to be welcomed, increasing transparency for customers. 

Compliance vs business 

Following regulatory laws is often viewed as a financial and resource drain, but without proper compliance, companies are vulnerable to situations where vast amounts of money can be lost quickly.

A case in point is the proposed reimbursement requirement for APP fraud, under which payment firms could face paying compensation of up to £415,000 per case.

Complying not only safeguards the client and their money, but also the business itself. About nine in ten (88%) financial services firms have reported an increased compliance cost over the past five years, according to research from SteelEye.  Embedding compliance earlier in business cultures can be beneficial in the long run, cutting the time and money needed to adapt to new regulations and preventing the stress of having to make wholesale changes rapidly. 

Building a cross-business compliance culture 

Compliance is a key principle at IFX, and we strive to be a champion in this area. In response to these challenges, the business restructured, establishing dedicated risk and regulatory departments, along with an internal audit function. 

Regulatory compliance can also support innovation, with new tools, standards, and approaches helping to ensure product safety, efficacy, and quality. This has helped the firm to navigate the regulatory landscape while driving growth and maintaining high standards.

This organisational shift allowed each business line to own its own risk, with each department taking part in tailored workshops designed to identify existing, new, and potential risk exposure. Shared responsibility for compliance is the only way to create a culture which values it. We see this as a great way for organisations to drive innovation while sticking to the rules.


How AI virtual assistants are transforming education and training

By Gregor Hofer, CEO and Co-founder at Rapport

What separates good doctors from excellent doctors, the type that might get five-star reviews if, like an Uber driver, their services were supported by a smartphone app?

Medical knowledge, expertise, and better outcomes are, of course, the most important factors. But – particularly when dealing with patients’ relatives, discussing risk assessment and imparting bad news – we shouldn’t underestimate the importance of bedside manner.

This might come naturally to some doctors but there are none for whom training isn’t useful, whether at medical school or on the job.

There will always be a place for real human interaction in this training, the type that involves role-play, with actors or colleagues playing out different scenarios that explore the most effective ways to handle difficult situations.

But what if this could be supplemented by more readily available and less resource-intensive experiences that simulate these training environments? And what if it could be applied across the numerous sectors, industries, and professions that could benefit from such an opportunity?

What might that mean for those instigating tricky conversations and, perhaps more importantly, those at the receiving end of them?

Advances in generative artificial intelligence – or GenAI – mean that these are no longer hypothetical questions.

There’s no limit to the type of person this technology could help, but we’ll review three – doctors, those working in corporate HR, and online students – to give a flavour of the benefits it brings.

Before we do, a quick word on how such applications work.

An overview of the technology

It all starts with data. With access to enough content – the type that you store and curate on your internal systems – large language models (LLMs) can be trained to find the most appropriate response to whatever user input they’re exposed to, whether written or spoken. You as a user can then respond to that response, and so the cycle continues.
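
As a rough illustration of that cycle, the Python sketch below grounds each reply in passages retrieved from an internal content store and loops on the user’s next message. Everything here – the sample content, `retrieve`, and `complete` – is an invented stub standing in for your own search index and model endpoint, not any particular vendor’s API.

```python
# Illustrative stub of a content-grounded conversational loop.
# CONTENT, retrieve() and complete() are invented placeholders.
CONTENT = [
    "Refunds are processed within five working days.",
    "Support is available on weekdays between 9am and 5pm.",
]

def retrieve(query: str) -> list[str]:
    """Toy retrieval: return stored passages sharing a word with the query."""
    words = set(query.lower().split())
    return [p for p in CONTENT if words & set(p.lower().split())] or CONTENT[:1]

def complete(messages: list[dict]) -> str:
    """Stub for the LLM call; a real system would hit your model endpoint."""
    return f"(model reply grounded in: {messages[-1]['content'][:60]}...)"

def chat_loop() -> None:
    history: list[dict] = []
    while (user := input("you> ")).strip():  # an empty line ends the session
        grounding = " ".join(retrieve(user))
        history.append({"role": "user",
                        "content": f"Context: {grounding}\nQuestion: {user}"})
        reply = complete(history)
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    chat_loop()
```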

You’ll have experienced something similar using the likes of ChatGPT, but because this is based on your own content, you’re more in control. (For simpler and more prescriptive scenarios, though, I’d add that with the best solutions, you can alternatively import predefined branching dialogue to keep your conversations on track.)
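
A predefined branching dialogue of that kind is essentially a small graph: each node holds a prompt and the labelled options that lead on to the next node. The sketch below shows one way it might be represented; the node names and wording are made up for the example.

```python
# Hypothetical branching dialogue: each node has a prompt and the options
# that lead to the next node, keeping the conversation on a set track.
DIALOGUE = {
    "start":   {"prompt": "Do you want to discuss results or next steps?",
                "options": {"results": "results", "next steps": "steps"}},
    "results": {"prompt": "The results are stable. Anything else?",
                "options": {"no": "end", "next steps": "steps"}},
    "steps":   {"prompt": "We suggest a follow-up in two weeks. Anything else?",
                "options": {"no": "end", "results": "results"}},
    "end":     {"prompt": "Thank you, goodbye.", "options": {}},
}

def run(dialogue: dict, node: str = "start") -> None:
    while True:
        current = dialogue[node]
        print(current["prompt"])
        if not current["options"]:        # terminal node: stop here
            return
        choice = input("> ").strip().lower()
        # Off-script answers fall back to the first scripted option.
        node = current["options"].get(
            choice, next(iter(current["options"].values())))

if __name__ == "__main__":
    run(DIALOGUE)
```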

It doesn’t stop there, though; by tapping into a solution that’s supported by experts in linguistics and computer-aided animation, your colleagues can interact in real-time with avatars equipped with believable facial expressions, accurate lip-synching capabilities, natural gestures and the ability to detect emotions.

All of this adds to the user’s willing suspension of disbelief – the sense that they’re interacting with a real person rather than an AI avatar – thereby enhancing the effectiveness of their learning.

These innovations are reshaping how we approach learning and skill development in so many critical fields. We said we’d look at three. We’ll start by returning to medicine.

Medical training

AI assistants can supplement the way doctors are taught to break bad news to patients, one of the hardest things they’ll face in practice and, given its subjectivity, something that can’t easily be looked up in a textbook on anatomy or physiology.

As we said from the outset, this is easier for some doctors than others, but given the literal life-and-death nature of such conversations and the shattering impact that the death of a loved one can have on a relative, there’s always room to improve medics’ empathy and communication skills – which is exactly what this technology delivers.

By utilizing experiential AI tools, clinicians can make better use of their time and alleviate pressure, fatigue, and burnout symptoms, ultimately allowing them to better serve their patients.

Corporate HR

In corporate HR, virtual assistants can significantly streamline and enhance the hiring and firing process, as well as any difficult conversation; whether it’s a tough review, a disciplinary hearing, letting down an employee about a promotion they’d applied for or any other scenario that might bring a bead of sweat to your forehead, it’s all about providing safe and cost-effective practice before doing it for real.

Tech research and consulting firm Gartner recently found that more than three-quarters (76%) of HR leaders believe that if their organisation doesn’t adopt and implement AI solutions, such as generative AI, in the next 12 to 24 months, it will lag in organizational success compared to those that do. Meanwhile, 34% of HR leaders participating in Gartner’s January benchmarking session said they were exploring potential use cases and opportunities for generative AI.

If they do manage to adopt the right technology, the impact will be massive among those who deploy it wisely. After all, which company wouldn’t want to upskill its HR professionals in tangible soft skills such as empathy, communication, problem-solving, and conflict resolution in a controlled setting?

Online education

AI-powered tools can hugely boost student engagement in remote learning environments, and the research suggests that it comes close to rivalling in-person experiences. When you consider the staff-to-student ratios common in most educational settings, this should be no surprise – think how many students can fit into a lecture hall (even if they don’t always turn up!).

But we’re not necessarily talking about formal education; this applies equally to any informal setting in which someone needs to improve their education in some way.

With this technology, you can invent new ways to educate your students – or staff – by transforming lessons into experiences, using interactive characters reflective of the subject. This means you can increase user satisfaction and performance without compromising on content.

Whatever the scenario and whatever the use case, the chances are that if you have the right content in sufficient quantities, you can tap it for interactions that would otherwise be lacking in uniqueness or prohibitively expensive.

With AI virtual assistants, everyone’s a winner.


How GenAI is Shaping the Future of Compliance

Gabe Hopkins, Chief Product Officer, Ripjar

Generative AI, or GenAI, uses complex algorithms to create content – including imagery, music, text, and video – with amazing results. Less well known are some of the ways in which it can transform data processing and task performance. This groundbreaking technology not only saves time, effort, and money, but has become a game-changer in enhancing operational efficiency and fostering innovation across various sectors.

However, some sectors, such as anti-financial crime compliance, have been slow to adopt innovations like GenAI, predominantly due to concerns over potential risks; some even see it as a risk in itself. Legal, Compliance, and Privacy leaders rank rapid GenAI adoption as their top issue for the next two years, all while other, less risk-averse organisations enjoy the upside of implementing GenAI in their systems.

This delay means many compliance teams are not taking advantage of AI tools that could revolutionise their processes and help them save up to 200 hours annually per user.

Entering the New Era of GenAI in Compliance

Teams in highly regulated sectors like banking and fintech face enormous pressures. Their responsibilities include identifying risks, such as sanctioned individuals and entities, updating policies to keep up with ever-evolving regulations, and handling expansive datasets. The high volume of this data makes manual reviews exhausting and susceptible to errors, which can lead to financial and reputational damage.

One way to overcome these challenges is by leveraging GenAI. For example, false positives (where a risk is flagged incorrectly) and false negatives (where a real risk is not flagged) are common issues caused by trying to deal with very high volumes of alerts and risk matches. Implementing GenAI can reduce these inaccuracies, significantly enhancing the efficiency and effectiveness of customer and counter-party screenings.
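
To make the trade-off concrete, here is a small, hypothetical Python sketch of threshold-based name screening: a high similarity threshold suppresses false positives but risks missing near matches (false negatives), while a lower one catches them at the cost of more noise. The names and thresholds are invented for illustration; production screening systems are considerably more sophisticated.

```python
# Toy name-screening rule illustrating the false-positive/false-negative
# trade-off. The list entries and thresholds are invented examples.
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading LLC"]

def screen(name: str, threshold: float = 0.85) -> list[str]:
    """Return list entries whose similarity to `name` meets the threshold."""
    return [entry for entry in SANCTIONS_LIST
            if SequenceMatcher(None, name.lower(), entry.lower()).ratio()
            >= threshold]

print(screen("Ivan Petrov"))     # exact match: flagged
print(screen("I. Petrov"))       # near match: missed at 0.85 (false negative)
print(screen("I. Petrov", 0.6))  # looser threshold: flagged, but noisier
```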

In practical terms, GenAI can reinvent how compliance tasks are performed. For instance, in drafting Suspicious Activity Report (SAR) narratives, where analysts need to justify suspicions in transactions, GenAI can help automate the writing process, combining human oversight with artificial efficiency. Platforms using GenAI excel at summarising vast amounts of data – crucial for tasks like screening adverse media, where they assist in identifying potential risks linked to negative information about clients.
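
A common pattern for this kind of automation, sketched below, is draft-then-review: the model produces a first-pass SAR narrative from structured case facts, and nothing is filed until an analyst signs it off. The case fields and the `draft_with_llm` stub are hypothetical, standing in for your own model call and case-management data.

```python
# Hypothetical draft-then-review flow for SAR narratives; the case fields
# and the drafting stub are invented placeholders, not a real system.
CASE = {
    "customer": "Example Ltd",
    "pattern": "a series of just-below-threshold cash deposits",
    "period": "January to March",
}

def draft_with_llm(case: dict) -> str:
    """Stub for the model call; here it just fills a fixed template."""
    return (f"Between {case['period']}, {case['customer']} made "
            f"{case['pattern']}, which is inconsistent with expected account "
            f"activity and may indicate structuring.")

draft = draft_with_llm(CASE)
print(draft)
# Human oversight: the narrative is only filed once an analyst approves it.
approved = input("Analyst approval (y/n)? ").strip().lower() == "y"
print("Filed." if approved else "Returned for analyst rework.")
```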

Understanding the Opportunities of GenAI and Overcoming Fears

For the compliance sector, it’s a crucial time to explore how to incorporate GenAI effectively and securely without undue risks. Dispelling fears about data misuse, the high costs of initial model setups, and the ‘black box’ nature of AI models is central to this transition. Teams are particularly cautious about sharing sensitive data and the hidden biases that AI might carry.

Yet these challenges can be countered. By choosing suitable models that ensure robust security and privacy, and by adjusting those models within a solid statistical framework, biases can be mitigated. However, organisations will need to turn to external expertise – whether data scientists or qualified vendors – to support them in training and correctly deploying AI tools.

The latest advancements in GenAI suggest that virtual analysts powered by this technology are achieving, and sometimes surpassing, human-level accuracy. Despite ongoing concerns, which may slow adoption rates, the evident potential benefits suggest a bright future for compliance teams using GenAI. These technological innovations promise not only to improve speed and efficiency but also to enhance the capability of teams to respond and adapt swiftly.

Embracing GenAI will not only significantly elevate the effectiveness of compliance operations but also safeguard organisations against potential pitfalls while maintaining trust and integrity in their industry practices.
