Aspire Technical Blog

Aspire Technical has been serving the Phoenix Metro area since 2000, providing IT support such as technical helpdesk support, computer support, and consulting to small and medium-sized businesses.

A Consortium of AI Companies Has Committed to Risk Reduction

Back in July, the White House secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help manage the risks that artificial intelligence potentially poses. More recently, eight more companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—also pledged to maintain “the development of safe, secure, and trustworthy AI,” as a White House brief reported.

Let’s explore why this is so important, especially as AI continues to develop.

The Plan: AI-Generated Content Will Be Watermarked

As beneficial as artificial intelligence has proven to be, it has also become a powerful tool for cybercriminals and other threat actors. From deepfaked images to replicated voices scamming people out of thousands of dollars, there are countless ways that legitimate AI tools can be weaponized.

This is why the Biden White House is pushing for these companies to create the technology needed to watermark AI content in such a way that the platform used to create it can be identified. The idea is that these watermarks would make it possible to prove whether an AI platform was involved in creating a given piece of content, helping to spot potential threats and pushing these platforms to build more effective detection into their products.
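To make the concept concrete, here is a minimal sketch of the verification idea behind content provenance. Note the assumptions: real AI watermarks are typically statistical (embedded in the model's word choices) or use signed provenance metadata, and the key name and tag format below are invented for illustration; this toy version just attaches a cryptographic tag that only the originating platform could have produced.

```python
import hashlib
import hmac

# Assumption for illustration: the platform holds a secret signing key.
PLATFORM_KEY = b"example-platform-secret"

def tag_content(text: str) -> str:
    """Append a tag proving the content came from the key holder."""
    tag = hmac.new(PLATFORM_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[watermark:{tag}]"

def verify_content(tagged: str) -> bool:
    """Check that the tag matches the content it accompanies."""
    text, _, tag_part = tagged.rpartition("\n[watermark:")
    tag = tag_part.rstrip("]")
    expected = hmac.new(PLATFORM_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

signed = tag_content("An AI-generated paragraph.")
print(verify_content(signed))                         # True: untouched content
print(verify_content(signed.replace("AI", "human")))  # False: tampered content
```

The design point is that any alteration to the content invalidates the tag, which is what lets a verifier tell both where content came from and whether it was modified afterward.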

In addition to the watermark, other safeguards have been agreed to by the technology firms:

  • Investments will be made in cybersecurity to protect the essential data that powers AI models.
  • Independent experts will test AI models before they are released to ensure that major AI risks are accounted for in their security.
  • Research into the risks AI poses to society at large, such as bias and inappropriate use, will be conducted, and any identified issues will be flagged.
  • Third parties will be better able to discover vulnerabilities and report them so they can be resolved.
  • These firms will share AI risk management data with one another, with academia, and with society at large.
  • These firms have also committed to disclosing their security risks and the risks their products pose to society, including bias.
  • These firms have also committed to creating AI that tackles some of society’s largest, most pressing issues.

Granted, these standards and practices aren’t enforceable by the government, but they serve as an invaluable first step toward more secure artificial intelligence.

We Can Help Secure Your Business Against Today’s Threats

We’ve long been committed to fulfilling business IT needs, particularly in regard to cybersecurity. Give us a call at (480) 212-5153 to find out what we can do for you and your operations.

