

Are CEOs ready for the AI era?

Steps CEOs can take on their journey to becoming AI-first leaders.

Over the next decade, generative AI will transform the lives of all of us. But are we ready for it? In the workplace today, there’s a significant disparity between CEOs, their senior leaders and the teams most likely to be impacted by AI over how prepared people really are for it.

A majority (56%) of C-suite executives are highly confident in their organisation’s leadership’s understanding of AI and its governance needs, but only 36% of CEOs share that sentiment. CEOs’ lower confidence in AI readiness is in direct conflict with the pace of adoption they seek to achieve. Most business and IT leaders (92%) agree they need to shift their organisations to AI in 12 months or less, and CEOs are even more bullish: half want it within the next six months. Yet CEOs are also the least confident that their organisation will harness the benefits of AI faster than its competitors.

Contrary to hyped fears that AI will replace the work humans do, there is much to be positive about. Our AI readiness report indicates that a majority of leaders (64%) disagree that AI will replace people’s jobs; rather, they expect to increase their headcount (by 9%) in 2024 as AI becomes more pervasive. The impact on jobs should be transformative rather than displacing.

How, then, can CEOs feel more confident about the state of their organisation’s AI readiness while also accelerating the pace of innovation? And what’s needed to bridge the gap between those leading AI adoption and scale and the people who must make it a reality?

Put people first in the era of AI

Successful AI adoption keeps its focus on the organisation’s most important asset: its people. By using AI in their day-to-day jobs, people can gain up to three hours per day as AI makes quick work of more mundane tasks. In Avanade’s internal pilot group, users also reported a 50% improvement in collaboration and teamwork, a 40% increase in problem resolution, and a 70% greater likelihood of fostering a creative approach to tasks.

CEOs have work to do in building consensus around what’s needed. Most business and IT leaders (63%) believe employees will need some new skills, or a completely new set of skills, to work with generative AI like Microsoft Copilot; conversely, 41% of CEOs think employees will need fewer skills because AI copilots will do more of their work. However, less than half of employees completely trust augmenting their work with AI, suggesting there’s much more work to do to win the hearts and minds of the people using it in the enterprise.

AI is not one-size-fits-all: what works for people in human resources will differ from what works for marketing, finance and IT. It’s critical to include stakeholders from across the business, to incorporate their feedback as AI experiments take hold, and to be honest about where AI has fallen short of its intended goals.

Disrupt your organisation with AI but do it responsibly

Most senior leaders say they’re already using AI regularly in the workplace, but lack consensus around whether their people, processes and platforms are using it responsibly with clearly defined governance.

Only half (52%) of senior leaders believe their organisation has the human capital and workforce planning processes in place to safeguard roles as generative AI is scaled, and 49% admit they’re not very confident that their organisation’s risk management processes are adequate for an enterprise-wide technical integration of AI. Confidence also varies widely by industry: the energy and banking industries are the most confident, with government at the bottom of the list.

The first step to readiness is getting everyone on board with a responsible AI framework that ensures trust and transparency and that the necessary guardrails are in place before AI pilot projects even begin.

Such a framework clarifies your ‘why for AI,’ pinpointing where AI has the most potential to solve a business challenge and deliver the most immediate impact. Within a responsible AI framework, establish clear guiding principles that translate corporate values into guidelines for AI, including the critical risks not worth taking. Create clear processes for managing and mitigating risks, set clear performance management objectives, and document all proposed and implemented AI use cases in a centre of excellence that can manage and provision technology resources as needed. Finally, ensure that employee skills and culture are ready to fully embrace AI by reinforcing guiding principles, providing training resources and reviewing ethical considerations.

When it comes to responsible AI, CEOs must realise that their work, and that of their people, is an ongoing journey. They must continually revisit what it means to use AI safely and ethically to sharpen their approach and principles as it evolves from largely an automation and productivity play today to something much more transformative tomorrow.

Ground your company’s use of AI within your purpose and values

An articulated purpose helps guide organisations through good times and challenging ones—whether it’s navigating a global health crisis or the rollout of transformative technologies like generative AI. Grounding your organisation’s AI journey in your purpose puts into clear focus what you will and won’t do with the technology.

The ethical and safety considerations of AI will continue to crop up for leaders, but a company’s purpose never wavers. A responsible AI framework includes a set of guiding principles rooted in purpose and values.

Taking the first steps is the way forward

An organisation’s first few steps with AI are the most critical, and they can determine its success or failure. CEOs set the tone for how successfully their organisations will embrace and adopt AI, and employees and customers alike need support to use AI successfully in their jobs.



Building Compliance into Business Culture is Essential in Fintech

Source: Finance Derivative

Tetyana Golovata, Head of Regulatory Compliance at IFX Payments

Regulation plays a critical role in shaping the fintech landscape. From Consumer Duty and FCA annual risk reporting to APP fraud, the tectonic plates of the sector are shifting, and whether you consider these regulations to benefit or hinder the industry, businesses are struggling to keep up.

According to research by fraud prevention fintech Alloy, 93% of respondents said they found it challenging to meet compliance requirements, while in a new study by Davies, over a third of financial leaders (36%) said their firms had been penalised for compliance breaches in the year to June. With the FCA bringing in its operational resilience rules next March, it is more important than ever to ensure your company makes the grade on compliance.

Lessons from history

Traditionally, FX has struggled with the challenge of reporting in an ever-developing sector. As regulatory bodies catch up and raise the bar on compliance, responsible providers must help the industry navigate the changes and upcoming deadlines.

Fintechs and payments companies are entering uncharted waters, facing pressure to beat rivals by offering more innovative products. When regulators have struggled to keep up in the past, gaps in legislation have allowed some opportunists to slip through the net, as seen in the collapse of FTX. Because of this, implementation and standardisation of the rules is necessary to ensure that innovation remains seen as a force for good, and to help identify and stamp out illegal activity.

Culture vs business

Culture has become a prominent factor in regulatory news, with cases of large fines and public censure relating to cultural issues. As the FCA’s COO, Emily Shepperd, shrewdly observed in a speech to the finance industry, “Culture is what you do when no one is looking”.

Top-level commitment is crucial when it comes to organisational culture. Conduct and culture are closely intertwined, and culture is not merely a tick-box exercise. It is not defined by perks like snack bars or Friday pizzas; rather, it should be demonstrated in every aspect of the organisation, including processes, people, counterparties, and third parties.

In recent years, regulatory focus has shifted from ethics to culture, recognising its crucial role in building market reputation, ensuring compliance with rules and regulations, boosting client confidence, and retaining employees. The evolving regulatory landscape has significantly impacted e-money and payments firms, with regulations strengthening each year. Each regulation carries elements of culture, as seen in:

  • Consumer duty: How do we treat our customers?
  • Operational resilience: How do we prevent disruptions to our customers, and how do we recover when they occur?
  • APP fraud: How do we protect our customers?

Key drivers of culture include implementing policies on remuneration, conflicts of interest, and whistleblowing, but for it to become embedded it must touch employees at every level.

This is showcased by senior stakeholders and heads of department building close relationships with colleagues across a company’s Sales, Operations, Tech and Product teams to create a collaborative environment.

Finance firms must recognise the trust bestowed on them by their customers and treat the protection of their investments and data as paramount. Consumer Duty may have been a wake-up call for some companies, but progressive regulation must always be embraced and its requirements seen as a baseline rather than a hurdle.

Similarly, the strengthening of operational resilience rules and the upcoming APP fraud regulation in October are to be welcomed, increasing transparency for customers. 

Compliance vs business 

Following regulatory laws is often viewed as a financial and resource drain, but without proper compliance, companies are vulnerable to situations where vast amounts of money can be lost quickly.

A case in point is the proposed reimbursement requirement for APP fraud, under which payment firms could face having to pay compensation of up to £415,000 per case.

Complying not only safeguards the client and their money, but also the business itself. About nine in ten (88%) financial services firms have reported an increased compliance cost over the past five years, according to research from SteelEye.  Embedding compliance earlier in business cultures can be beneficial in the long run, cutting the time and money needed to adapt to new regulations and preventing the stress of having to make wholesale changes rapidly. 

Building a cross-business compliance culture 

Compliance is a key principle at IFX, and we strive to be a champion in this area. In response to these challenges, the business restructured, establishing dedicated risk and regulatory departments, along with an internal audit function. 

Regulatory compliance aims to support innovation by developing and using new tools, standards and approaches that ensure product safety, efficacy and quality. It has helped the firm to navigate the regulatory landscape while driving growth and maintaining high standards.

This organisational shift allowed each business line to own its own risk, with each department taking part in tailored workshops designed to identify existing, new and potential risk exposure. Shared responsibility for compliance is the only way to create a culture that values it. We see this as a great way for organisations to drive innovation while sticking to the rules.



How AI virtual assistants are transforming education and training

By Gregor Hofer, CEO and Co-founder at Rapport

What separates good doctors from excellent doctors, the type that might get five-star reviews if, like Uber drivers, their services were supported by a smartphone app?

Medical knowledge, expertise, and better outcomes are, of course, the most important factors. But – particularly when dealing with patients’ relatives, discussing risk assessment and imparting bad news – we shouldn’t underestimate the importance of bedside manner.

This might come naturally to some doctors but there are none for whom training isn’t useful, whether at medical school or on the job.

There will always be a place for real human interaction in this training, the type that involves role-play, with actors or colleagues playing out different scenarios that explore the most effective ways to handle difficult situations.

But what if this could be supplemented by more readily available and less resource-intensive experiences that simulate these training environments? And what if it could be applied across the numerous sectors, industries and professions that could benefit from such an opportunity?

What might that mean for those instigating tricky conversations and, perhaps more importantly, those at the receiving end of them?

Advances in generative artificial intelligence – or GenAI – mean that these are no longer hypothetical questions.

There’s no limit to the type of person this technology could help, but we’ll review three – doctors, those working in corporate HR, and online students – to give a flavour of the benefits it brings.

Before we do, a quick word on how such applications work.

An overview of the technology

It all starts with data. With access to enough content, the type that you store and curate on your internal systems, large language models (LLMs) can be trained to find the most appropriate response to whatever user input they’re exposed to, whether written or spoken. You can then respond to that response, and so the cycle continues.

You’ll have experienced something similar using the likes of ChatGPT, but because this is based on your own content, you’re more in control. (For simpler and more prescriptive scenarios, though, I’d add that with the best solutions, you can alternatively import predefined branching dialogue to keep your conversations on track.)
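To make that loop concrete, here is a minimal sketch in Python of the pattern described above: a predefined branching script handles the prescriptive moments, and anything open-ended falls through to a model grounded in your own content. The query_llm function, the intent labels and the replies are hypothetical placeholders for illustration, not any vendor’s actual API.

    # Minimal conversation loop: predefined branching dialogue is tried
    # first; open-ended input falls through to an LLM grounded in your
    # own curated content. query_llm is a hypothetical stand-in.

    BRANCHING_DIALOGUE = {
        # intent label -> scripted reply
        "greeting": "Hello. Shall we begin today's role-play scenario?",
        "goodbye": "Thanks for practising. See you next session.",
    }

    def query_llm(user_input: str, context: str) -> str:
        """Hypothetical call to a model trained on internal content."""
        return f"[model reply, grounded in {context}, to: {user_input!r}]"

    def respond(user_input: str, intent: str, context: str) -> str:
        # Prescriptive scenarios stay on the predefined script...
        if intent in BRANCHING_DIALOGUE:
            return BRANCHING_DIALOGUE[intent]
        # ...while free-form conversation goes to the model.
        return query_llm(user_input, context)

    print(respond("Hi there", "greeting", "medical role-play content"))
    print(respond("How should I open a bad-news conversation?", "open", "medical role-play content"))

Trying the scripted branch first is what keeps simpler scenarios ‘on track’; the model only fills the open-ended gaps.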

It doesn’t stop there, though; by tapping into a solution that’s supported by experts in linguistics and computer-aided animation, your colleagues can interact in real-time with avatars equipped with believable facial expressions, accurate lip-synching capabilities, natural gestures and the ability to detect emotions.

All of this adds to the user’s willing suspension of disbelief, helping them feel they’re interacting with a real person rather than an AI avatar, thereby enhancing the effectiveness of their learning.

These innovations are reshaping how we approach learning and skill development in so many critical fields. We said we’d look at three. We’ll start by returning to medicine.

Medical training

AI assistants can supplement the way doctors are taught to break bad news to patients, one of the hardest things they’ll face in practice and, given its subjectivity, something that can’t easily be looked up in a textbook on anatomy or physiology.

As we said at the outset, this is easier for some doctors than others. But given the literal life-and-death nature of such conversations and the shattering impact that the death of a loved one can have on a relative, there’s always room to improve medics’ empathy and communication skills, which is exactly what this technology delivers.

By utilising experiential AI tools, clinicians can make better use of their time, alleviate pressure, fatigue and burnout symptoms, and ultimately serve their patients better.

Corporate HR

In corporate HR, virtual assistants can significantly streamline and enhance the hiring and firing process, as well as any difficult conversation. Whether it’s a tough review, a disciplinary hearing, letting down an employee about a promotion they’d applied for or any other scenario that might bring a bead of sweat to your forehead, it’s all about providing safe and cost-effective practice before doing it for real.

Tech research and consulting firm Gartner recently found that more than three-quarters (76%) of HR leaders believe that if their organisation doesn’t adopt and implement AI solutions, such as generative AI, in the next 12 to 24 months, it will lag in organisational success compared with those that do. Meanwhile, 34% of HR leaders participating in Gartner’s January benchmarking session said they were exploring potential use cases and opportunities for generative AI.

Among companies that adopt the right technology and deploy it wisely, the impact will be massive. After all, which company wouldn’t want to upskill its HR professionals in tangible soft skills such as empathy, communication, problem-solving and conflict resolution in a controlled setting?

Online education

AI-powered tools can hugely boost student engagement in remote learning environments, and the research suggests that it comes close to rivalling in-person experiences. When you consider the staff-to-student ratios common in most educational settings, this should be no surprise – think how many students can fit into a lecture hall (even if they don’t always turn up!).

But we’re not necessarily talking about formal education; this applies equally to any informal setting in which someone needs to improve their education in some way.

With this technology, you can invent new ways to educate your students – or staff – by transforming lessons into experiences, using interactive characters reflective of the subject. This means you can increase user satisfaction and performance without compromising on content.

Whatever the scenario and whatever the use case, the chances are that if you have the right content in sufficient quantities, you can tap it for interactions that would otherwise be lacking in uniqueness or prohibitively expensive.

With AI virtual assistants, everyone’s a winner.



How GenAI is Shaping the Future of Compliance

Gabe Hopkins, Chief Product Officer, Ripjar

Generative AI, or GenAI, uses complex algorithms to create content, including imagery, music, text and video, with amazing results. Less well known are some of the ways in which it can transform data processing and task performance. This groundbreaking technology not only saves time, effort and money, but has become a game-changer in enhancing operational efficiency and fostering innovation across various sectors.

However, some sectors, such as anti-financial crime compliance, have been slow to adopt new innovations like GenAI, predominantly due to concerns over potential risks. In fact, they can even see it as a risk in itself. Legal, Compliance and Privacy leaders rank rapid GenAI adoption as their top issue for the next two years, all while other, less risk-averse organisations enjoy the upside of implementing GenAI in their systems.

This delay means many compliance teams are not taking advantage of AI tools that could revolutionise their processes and help them save up to 200 hours annually per user.

Entering the New Era of GenAI in Compliance

Teams in heavily regulated sectors like banking and fintech face enormous pressures. Their responsibilities include identifying risks, such as sanctioned individuals and entities, updating policies to keep up with ever-evolving regulations, and handling expansive datasets. The sheer volume of this data makes manual reviews exhausting and susceptible to errors, which can lead to financial and reputational damage.

One way to overcome these challenges is by leveraging GenAI. For example, false positives (where a risk is raised incorrectly) and false negatives (where a real risk is not flagged) are common issues caused by trying to deal with very high volumes of alerts and risk matches. Implementing GenAI can reduce these inaccuracies, significantly enhancing the efficiency and effectiveness of customer and counterparty screenings.

In practical terms, GenAI can reinvent how compliance tasks are performed. For instance, in drafting Suspicious Activity Report (SAR) narratives, where analysts need to justify suspicions about transactions, GenAI can help automate the writing process, combining human oversight with artificial efficiency. Platforms using GenAI excel at summarising vast amounts of data, which is crucial for tasks like screening adverse media, where they assist in identifying potential risks linked to negative information about clients.
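As a rough illustration of the SAR-drafting idea (a sketch under assumptions, not Ripjar’s implementation), the outline below assembles a fact-constrained prompt from structured alert data and passes it to a generic model call. The generate function and the Alert fields are hypothetical placeholders, and the draft goes back to a human analyst rather than being filed automatically.

    # Sketch: drafting an SAR narrative from structured alert data.
    # generate() is a hypothetical LLM call; a human analyst reviews
    # and edits every draft before anything is filed.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        customer: str
        pattern: str      # e.g. "structuring below the reporting threshold"
        total: float
        currency: str

    def generate(prompt: str) -> str:
        """Hypothetical LLM call; swap in your model of choice."""
        return f"[draft narrative based on: {prompt[:60]}...]"

    def draft_sar_narrative(alert: Alert) -> str:
        prompt = (
            "Draft a Suspicious Activity Report narrative. Use only the "
            "facts below and do not speculate beyond them.\n"
            f"Customer: {alert.customer}\n"
            f"Observed pattern: {alert.pattern}\n"
            f"Total amount: {alert.total:,.2f} {alert.currency}\n"
        )
        return generate(prompt)  # returned for analyst review, not auto-filed

    print(draft_sar_narrative(Alert("Acme Ltd", "structuring below the reporting threshold", 48500.0, "GBP")))

Constraining the prompt to known facts and keeping an analyst in the loop is how the combination of human oversight and artificial efficiency described above stays auditable.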

Understanding the Opportunities of GenAI and Overcoming Fears

For the compliance sector, it’s a crucial time to explore how to incorporate GenAI effectively and securely without undue risk. Dispelling fears about data misuse, the high costs of initial model setups, and the ‘black box’ nature of AI models is central to this transition. Teams are particularly cautious about sharing sensitive data and about the hidden biases that AI might carry.

Yet some strategies can counter these challenges. Biases can be mitigated by choosing suitable models that ensure robust security and privacy, and by adjusting those models within a solid statistical framework. However, organisations will need to turn to external expertise, whether data scientists or qualified vendors, to support them in training and correctly deploying AI tools.

The latest advancements in GenAI suggest that virtual analysts powered by this technology are achieving, and sometimes surpassing, human-level accuracy. Despite ongoing concerns, which may slow adoption rates, the evident potential benefits suggest a bright future for compliance teams using GenAI. These technological innovations promise not only to improve speed and efficiency but also to enhance the capability of teams to respond and adapt swiftly.

Embracing GenAI will not only significantly elevate the effectiveness of compliance operations but also safeguard organisations against potential pitfalls while maintaining trust and integrity in their industry practices.

