
AI: Experts propose guidelines for safe systems
A group of AI experts and data scientists has released a new framework for developing artificial intelligence products safely.
The World Ethical Data Foundation is an organisation with 25,000 members, including staff working at tech companies and institutions such as Meta, Google and Samsung.
The framework has been published in the form of an open letter, which can be read in full here. In it, the group sets out open suggestions for building safe AI systems.
The letter states:
"These are the first questions and considerations we believe every AI team and individual builder should take every day to ensure we are releasing more ethical AI models. They are intended to be communicated in simple language without technical jargon to ensure the process can be understood by every audience and will be translated over time into every local human language where we can find volunteers to support."
"We know we are unlikely to get this completely right, especially when dealing with a technology that requires continuous tracking, but with your help each iteration will bring meaningful improvements. This is version 1 of many future versions that developers can use when assessing our work. The community will help it as we collect more questions from the public and suggestions from the data science community to refine the list and capture the necessary steps that should be taken to clarify and validate our intentions."
Their open letter may turn out to be a genuinely influential proposal, or it may be quickly forgotten in the depths of the Internet.
Main source: BBC