UK Must Regulate Artificial Intelligence Like Nuclear Power
The digital spokesperson for the UK’s Labour Party, Lucy Powell, wants artificial intelligence (AI) to be regulated like medicine or nuclear power.
She argued that regulators must control the widespread use of large language models, such as those developed by OpenAI, through a licensing regime.
Powell Proposes to Regulate Rather Than Ban AI
According to Powell, regulators should look to the governance of nuclear power and medicine for clues on how the UK could regulate AI development.
UK authorities have had to hastily revise their regulatory approach given the exponential growth in the capabilities of large language models.
British prime minister Rishi Sunak is traveling to the US later this week to argue that the UK should be at the forefront of AI regulation.
Labour leader Keir Starmer will meet with top Google AI executives this week. Google recently announced a watermarking feature intended to help users distinguish genuine content from AI-generated fakes.
A new metadata feature will embed files with information linking them back to their original context.
AI is also on the radar of European Union regulators, who warn that companies using large language models like Google’s Bard or OpenAI’s ChatGPT should build safeguards to prevent the spread of misinformation.
Vera Jourova, Vice President of the European Commission, said on Monday,
“Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognize such content and clearly label this to users.”
She went on to say that these companies must also employ technology to identify fake content.
UK Advisor Says Time is Running Out
On Monday, Sunak’s AI advisor, Matt Clifford, warned that AI could kill humanity within two years. He added that the pace of current AI development means regulators need to expedite the new rules for the industry.
“You can use AI today to give you recipes for bioweapons or to launch large-scale cyberattacks,” he said.
Clifford went on to say that one of the challenges of regulating AI is understanding how it works. He said that, in his experience, even the companies that develop AI tools do not fully understand how they generate answers.
“Even when the code is open-source… this doesn’t give us the transparency. It’s important not to conflate… openness and accessibility with understanding.”
Clifford argues that audits and evaluations of AI models must form part of any regulatory regime.
The AI task force adviser added that even if countries like China and Russia do not share the rest of the world’s views on AI safety, AI’s potential to improve lives is likely to be a common goal, and rules that promote it should be encouraged.
Elon Musk said in a recent interview with the Wall Street Journal that artificial intelligence could usher in world peace.
Musk was one of several prominent signatories of an open letter calling for a halt to AI training until sound governance is in place. Coinbase CEO Brian Armstrong recently countered that no one is qualified to provide such governance.