
Is It Time to Hit Pause? The Rise of AI, Mitigation of Risks and a Move Towards Responsible AI

In 2023, the ascent of AI technology has exceeded all previous projections, becoming an integral part of daily life across industries. However, with its exponential growth comes a chorus of voices, including influential figures like Elon Musk and Gary Marcus, calling for a six-month pause on the development of AI systems with human-competitive intelligence, citing profound risks to humanity.

What was perceived as future technology in 2022 is moving into the mainstream and daily use in 2023, and there is a strong case for using AI as an enabling partner across multiple industries. Yet Musk, Marcus and Apple co-founder Steve Wozniak are among more than 1,800 signatories of a recent open letter calling for a six-month pause on the development of systems "more powerful" than GPT-4. Their stated reason: "AI systems with human-competitive intelligence pose profound risks to humanity."

Risks of AI

If that sounds serious, it is because it is. Some of the key risks associated with AI technology include (but are not limited to):

  • invasion of personal data

  • risks of cyberattack

  • bias and discrimination

  • opacity and lack of transparency

  • lack of regulation at a national and global level

  • accountability of AI-driven decisions

  • job losses and humans being replaced by AI across a wide variety of industries and sectors

These significant concerns strengthen the argument of those calling for regulation of a technology that has the capacity to evolve into something humans cannot regulate or contain. Those asking for regulation and a pause on artificial intelligence systems "more powerful" than GPT-4 believe that future systems will be even more dangerous. We find ourselves in the same fix as Prometheus, who stole fire from the Olympian gods; what followed, as the story goes, was the opening of Pandora's box. In the case of AI, the caution is that the box could be full of powerful, unregulated, uncontained and human-competitive technology, released into the world with dire consequences for humanity.

Potential Solutions

While some of the most influential names in technology are calling for a six-month pause on AI development, other highly influential technology leaders, such as Andrew Ng, have publicly opposed it. Ng's premise for mitigating risk while keeping up the pace of development is to involve humans in the training and development process. Given human nature and its proven fallibility and bias, however, we need more than human involvement alone to create responsible AI that can partner alongside humans and benefit humanity in the long term. Here are several potential solutions to mitigate the risks of artificial intelligence:

  • Developing national and international regulations and governance

  • Developing organizational standards for the use and application of AI

  • Making AI part of company culture, strategy and roadmaps; embedding it into the operating model and processes; and appointing people with AI capabilities

  • Responsible AI practices that close the trust gap between AI systems and their users, through best practices, tools, processes and people that can control and manage AI systems

  • Monitoring, reporting and continuous improvement programmes

Despite the rapid rise of AI capabilities, operationalising them in countries, organisations and businesses remains notably difficult: a recent PwC global Responsible AI survey of more than 1,000 participants concluded that only 6% of organisations have been able to operationalise AI. Is your organisation considering AI capabilities as part of its operating model, core services and offerings, or looking into integrating AI as an enabling partner?


Reproduced with the permission of Rachel Bandara #1 Best Selling Author | Master Coach | Podcaster | Agency Owner | AI x Human Educator



