Washington Debates How to Regulate Artificial Intelligence Before It’s Too Late

Washington is now filled with both excitement and anxiety about AI. Politicians from both parties are focusing on how to regulate the artificial intelligence that has taken Silicon Valley by storm. Legislators are closely monitoring the AI race sparked by the meteoric popularity of ChatGPT, OpenAI’s chatbot. The technology’s astounding capacity for humanlike conversation, image description, coding, and more has stunned users, but it has also raised fresh worries about children’s online safety and misinformation.


How to regulate Artificial Intelligence

Officials have held meetings to debate how to govern and regulate the technology before it’s too late, aiming to respond quickly and head off negative consequences for society.

President Biden recently convened a meeting on the risks and opportunities of artificial intelligence.

He heard from members of the President’s Council of Advisors on Science and Technology, which includes executives from Google and Microsoft. At the meeting, Biden told members of the council to “make sure their products are safe before making them public.”

When asked whether AI was dangerous, he said there was no clear answer. “Could be,” he replied.

The Federal Trade Commission and the Justice Department, the two major regulators of Silicon Valley, have indicated that they are keeping an eye on the developing AI industry.

The FTC recently warned AI developers and businesses that they risk fines if they fraudulently overstate the benefits of AI technologies or fail to consider the dangers before release.

Jonathan Kanter, the top antitrust enforcer at the Justice Department, announced last month that his office has started an initiative called “Project Gretzky” to stay ahead of competition concerns in artificial intelligence markets.

Enforcement agencies in countries with extensive privacy laws are already considering how to apply those regulations to ChatGPT.

Canada’s privacy commissioner announced this week that the chatbot will be the subject of an investigation. The statement followed Italy’s decision last week to ban the chatbot over concerns that it violates laws intended to protect the privacy of European Union citizens. Germany is considering a similar step.

OpenAI responded to the increased scrutiny in a blog post explaining the steps it is taking to address AI safety, such as limiting the amount of personal data about individuals that it uses to train its AI models.

Meanwhile, Rep. Ted Lieu is drafting legislation to establish a federal body to regulate the technology.

He acknowledged, however, that it will be difficult to win support for a new federal agency.

Tristan Harris, the well-known technology ethicist, also visited Washington in recent weeks to meet with Biden administration officials and lawmakers of both parties on Capitol Hill. Last month, Harris gathered a group of influential D.C. figures to discuss what he sees as an approaching catastrophe.

Harris urged legislators to take drastic measures to slow the rollout of AI. He argued that AI is likely to become deeply integrated into society very quickly, and that by confronting the issue now, before it’s too late, we can harness the technology’s power while updating our institutions.

The message seems to have struck a chord with some cautious lawmakers.

Senator Chris Murphy, seizing on a clip from Harris and Aza Raskin’s “The A.I. Dilemma” video, tweeted that ChatGPT “taught itself advanced chemistry” and had developed humanlike capabilities.

To this, Timnit Gebru, the former co-leader of Google’s group focused on ethical artificial intelligence, responded: “Please do not spread misinformation. Our job countering the hype is hard enough without politicians jumping in on the bandwagon.”

“Policymakers and technologists do not always speak the same language,” Tristan Harris wrote in an email. Harris does not claim in his presentation that ChatGPT taught itself chemistry; rather, he referenced a study finding that the chatbot has chemistry skills that no human programmer purposefully gave the system.

Murphy said in an interview that his tweet was criticized by numerous business leaders and professionals. He says he is aware that AI is not sentient and cannot teach itself, but that he was trying to describe chatbots in an approachable way.

Why is it important to regulate Artificial Intelligence?

It is important to regulate artificial intelligence because its rapid development and adoption could have significant impacts on society, such as job displacement, bias in decision-making, and privacy violations. Without proper regulation, AI could lead to unintended harmful consequences.

How to regulate Artificial Intelligence?

Regulating artificial intelligence is a complex process that can involve a range of approaches, including new laws, guidelines, and policies. Potential strategies include setting ethical standards, requiring transparency and accountability, promoting public awareness, and establishing oversight bodies.
