U.S. lawmakers are making progress in addressing the risks associated with large language models and other forms of artificial intelligence (AI). A group of senators recently introduced a bipartisan bill, the "Artificial Intelligence Research, Innovation, and Accountability Act of 2023," whose main objective is to establish new standards for transparency and accountability in AI. Its co-sponsorship by both Democrats and Republicans signals cross-party willingness to address the potential harms posed by the technology.
Alongside these legislative efforts, several companies are taking their own steps to mitigate AI risks. IBM, for instance, has unveiled a new tool called watsonx.governance, which aims to detect potential AI risks and monitor factors such as bias, accuracy, fairness, and privacy. The tool reflects IBM's stated commitment to the responsible use of AI technology.
Shutterstock is also working to build ethics into its AI platform. To address concerns around bias, transparency, creator compensation, and harmful content, the company has launched a framework called TRUST, with the goal of creating an environment of trust and accountability within its AI platform.
While self-assessment plays a crucial role, some experts argue that having an external organization overseeing AI standards could provide even better accountability and transparency. However, this approach would require widely agreed-upon standards and cost-effective methods for auditing AI on a timely basis. Such measures could be essential in ensuring that AI technology is developed and deployed responsibly.
Furthermore, marketers express their concerns about striking the right balance between utilizing AI to its fullest potential and avoiding excessive risks. With the increasing role of AI in marketing strategies, marketers need to carefully navigate the potential challenges associated with the technology to maximize its benefits without compromising important ethical considerations.
Overall, U.S. lawmakers and companies within the AI industry continue to respond in parallel to the risks posed by AI. The introduction of the bipartisan bill and the rollout of new tools and frameworks by influential companies like IBM and Shutterstock reflect a shared commitment to greater transparency, accountability, and ethical use of AI. These collective efforts aim to ensure that the potential benefits of AI are realized while minimizing possible harms.

In a statement, Nick Primola, group executive vice president of the ANA Global CMO Growth Council, emphasized the importance of the industry taking the lead on areas such as privacy, misinformation, brand safety, and transparency. Falling behind and having to catch up, he argued, would be a disgrace. Despite the lessons learned from digital and social platforms, the industry has lagged in these areas for several years.
While YouTube and Meta have now introduced disclosure requirements, experts caution that identifying AI-generated content is not always straightforward. Nevertheless, Google's and Meta's actions are generally viewed as a positive step forward. Alon Yamin, co-founder of Copyleaks, a company that uses AI to detect AI-generated text, compares AI detection to antivirus software: even with the right tools in place, not everything can be caught. Still, Yamin suggests that examining text-based transcripts of videos and implementing pre-upload authentication methods could help address the challenge.