31478 Industrial Road Suite 200, Livonia, Michigan 48150 sales@xfer.com

XFER Blog

XFER has been serving the Livonia area since 1994, providing IT Support such as technical helpdesk support, computer support, and consulting to small and medium-sized businesses.

A Consortium of AI Companies Has Committed to Risk Reduction

Back in July, the White House secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help manage the risks that artificial intelligence potentially poses. More recently, eight more companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—also pledged to maintain “the development of safe, secure, and trustworthy AI,” as a White House brief reported.

Let’s explore why this is so important, especially as AI continues to develop.

The Plan: AI-Generated Content Will Be Watermarked

As beneficial as artificial intelligence has proven to be, it has also become a powerful tool for cybercriminals and other threat actors. From deepfake images to replicated voices used to scam people out of thousands of dollars, there are countless ways that legitimate AI tools can be weaponized.

This is why the Biden White House is pushing these companies to develop the technology needed to watermark AI-generated content in a way that identifies the platform used to create it. The theory is that these watermarks would help prove whether an AI platform was involved in creating a piece of content, making it easier to spot potential threats and encouraging the platforms themselves to detect misuse more effectively.

In addition to watermarking, the technology firms have agreed to other safeguards:

  • Investments will be made in cybersecurity to protect the essential data that powers AI models
  • Independent experts will test AI models before they're released to ensure that major risks are accounted for in their security
  • Research will be conducted into the risks AI poses to society at large, such as bias and inappropriate use, and any identified instances will be flagged
  • Third parties will be better able to discover and report vulnerabilities so they can be resolved
  • The firms will share AI risk management data with one another, with academia, and with society at large
  • The firms will disclose their security risks and the risks their products pose to society, including bias
  • The firms will commit to creating AI that tackles some of society's largest, most pressing issues

Granted, these standards and practices aren’t enforceable by the government, but they serve as an invaluable first step towards more secure artificial intelligence.

We Can Help Secure Your Business Against Today’s Threats

We’ve long been committed to fulfilling business IT needs, particularly in regard to cybersecurity. Give us a call at 734-927-6666 / 800-GET-XFER to find out what we can do for you and your operations.

