
Growth of AI in 2023 Promoting Development of Governance


2023 marked a year of rapid growth for AI. As a result, 2024 is likely to witness announcements regarding governance frameworks that can guide the use of AI and its applications across various sectors.

In a recent episode of the World Economic Forum podcast Radio Davos, the guests were asked a question: “If 2023 was the year the world became familiar with generative AI, is 2024 the year in which governments will act in earnest on its governance?”

The guests on the show included:

Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, a nonprofit organization

Aidan Gomez, co-founder and CEO of Cohere, an enterprise AI company

Anna Makanju, Vice President of Global Affairs at OpenAI

From this question, the discussion turned to the factors driving the development of AI, as well as those creating the need to govern and regulate this booming technology.

What raised the need for AI governance?

Stanford University's AI Index reflects policymakers' growing interest in the technology: across 127 countries, the number of bills pertaining to artificial intelligence that were introduced and passed into law grew from 1 in 2016 to 37 in 2022. Aidan Gomez agreed that this data justifies the need for regulation and governance, adding that it is equally important to analyze how to reach the desired level of governance.

Notably, the European Union is also at the final stage of developing and introducing its AI Act.

He also highlighted the difficulty of regulating a horizontal technology like language, since it affects every vertical industry that involves human and language interaction.

How can the regulations be implemented?

He recommended regulating the industry at the vertical layer while simultaneously supporting existing policymakers and regulators. Getting smart about generative AI and its implications will help them analyze vulnerabilities in context.

Coordination is essential for implementing global regulation with international consensus. The objective of introducing regulation should be to empower continued innovation, especially for smaller companies.

AI and the 2024 Elections

With the growth and advancement of AI, the potential for misuse of the technology has also increased. More than 2 billion voters worldwide are expected to go to the polls in 2024, highlighting risks such as disinformation and deepfakes that threaten to disrupt the democratic process.

Most of the major AI companies have signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a set of commitments to restrict the use of fake images, videos and audio of political candidates.

Alexandra Reeve Givens, however, said that deepfakes are just one part of the immediate puzzle. In the long term, it is important to consider the role of economic inequality in democracy.

She also drew attention to one of the most discussed concerns among users: AI replacing human jobs. She added that AI will not only be used for manual work but may also be involved in selection processes.

AI is expected to perform activities such as approving loan applications and screening job applications. As AI makes its way into all these functions, human dependency on the technology will increase. Given this widening adoption, policymakers and companies alike need to pay attention to the technology.

Discussions are already underway between governments, companies and civil society on how to deal with AI, with many pointing to the need to implement regulation this year.

The authorities began by debating the short-term versus long-term vulnerabilities of 2024; those discussions have now matured, with the attendees agreeing on the importance of both.

Conclusion: Integration of AI and Regulations

Anna Makanju addressed both the international consensus on catastrophic risk around AI and self-regulation within companies.

She emphasized red-teaming, a common practice in the industry: a process used in cybersecurity and software development to probe a system for risks and evaluate its behavior.

OpenAI has implemented this evaluation strategy for its models, highlighting immediate risks and their expected impacts, as well as ways to mitigate those vulnerabilities.

Four of the biggest AI companies, Anthropic, Google, Microsoft and OpenAI, have formed the Frontier Model Forum to share safety insights and expertise. The forum is committed to advancing AI safety research and forging public-private collaboration to ensure the responsible development of AI.

The forum aims to identify a common set of practices that can feed into regulation and help set thresholds. The companies shaping these practices have built and deployed AI products themselves and so have the most detailed understanding of the technology. Even so, the speakers collectively agreed that no company should be trusted to self-regulate.

Finally, the World Economic Forum's AI Governance Alliance provides a forum for governments, industry leaders, academic institutions and civil society organizations to openly address issues around AI and help ensure the design and delivery of responsible systems.
