Top AI firms pledge 'responsible' tech development

On the final day of the Seoul AI summit, fourteen companies, including South Korea's Samsung Electronics, sprawling tech giant Naver and America's Google and IBM, agreed to "minimise risks" as they push the cutting-edge field forward.

South Korean Foreign Minister Cho Tae-yul at the AI Global Forum in Seoul, South Korea, May 22, 2024. / Photo: AP
More than a dozen of the world's leading artificial intelligence firms pledged at a global summit to develop and use their technology safely, as concern rises over the lack of safeguards for ChatGPT-style AI systems.

Fourteen companies, including South Korea's Samsung Electronics, sprawling tech giant Naver and America's Google and IBM, agreed on Wednesday, the final day of the Seoul summit, to "minimise risks" as they push the cutting-edge field forward.

"We commit to continuing to advance research endeavors to promote responsible development of AI models," they said in the Seoul AI Business Pledge.

The companies also promised to "minimise risks, and enable robust evaluations of capabilities and safety".


The two-day summit, co-hosted by South Korea and Britain, gathered top officials from global AI companies such as OpenAI and Google DeepMind to find ways to ensure the safe use of the technology.

Their commitment builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

Under their new pledge, the companies also agreed to help socially vulnerable people through AI technologies, although the pledge gave no details on how this would be achieved.

Sixteen tech firms, including ChatGPT-maker OpenAI, Google DeepMind and Anthropic, also pledged on Tuesday to make fresh safety commitments that included sharing how they assess the risks of their technology.

That includes what risks are "deemed intolerable" and what the firms will do to ensure that such thresholds are not crossed.

The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.

Such AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as breakthroughs that will improve lives and businesses around the world.

However, critics, rights activists and governments have warned that they can be misused in a wide variety of ways, including the manipulation of voters through fake news stories or "deepfake" pictures and videos of politicians.

