AI chatbots are known to produce racially prejudiced output when prompted with certain questions, and many research efforts have targeted this problem. Now, a new training method aims to iron out the issue. Called "fair deduplication," or FairDeDup for short, it is the result of research conducted by a team from Adobe and Eric Slyman, a doctoral student at Oregon State University's College of Engineering.
Deduplicating the data sets used in AI training means removing redundant information, which lowers the cost of the whole process. Because that data is currently scraped from all over the internet, it contains the unfair or biased ideas and behaviors that humans often come up with and share online.
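The article does not spell out the mechanics, but semantic deduplication is typically done by embedding each sample and discarding near-duplicates. The following Python sketch illustrates that idea under assumptions of ours: the function name, the cosine-similarity threshold, and the brute-force comparison loop are all illustrative, not the researchers' actual pipeline.

```python
import numpy as np

def semantic_dedup(embeddings, threshold=0.95):
    """Keep only samples whose embedding is not nearly identical
    (cosine similarity above `threshold`) to one already kept.
    Illustrative O(n^2) sketch, not a production implementation."""
    # Normalize so a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, v in enumerate(normed):
        if all(v @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

# Example: 100 random "embeddings"; duplicates of real data would cluster tightly.
emb = np.random.default_rng(0).normal(size=(100, 16))
print(len(semantic_dedup(emb)), "samples kept of", len(emb))
```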
According to Slyman, "FairDeDup removes redundant data while incorporating controllable, human-defined dimensions of diversity to mitigate biases. Our approach enables AI training that is not only cost-effective and accurate but also more fair." The biases perpetuated by today's AI chatbots relate not only to occupation, race, and gender, but also to age, geography, and culture.
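To make the quoted idea concrete, here is a minimal, hedged sketch of what "removing redundant data while incorporating human-defined dimensions of diversity" could look like: cluster near-duplicate samples, then, when deciding which duplicates to keep, prefer those whose attribute value is least represented among the samples kept so far. The clustering method, parameter names, and attribute handling are assumptions for illustration, not the published FairDeDup algorithm.

```python
import numpy as np

def fair_dedup(embeddings, attributes, n_clusters=4, keep_per_cluster=2, seed=0):
    """Toy fairness-aware pruning: group similar samples, then keep a few
    representatives per group, ranked by how rarely their attribute value
    (a human-defined diversity dimension) has been kept so far."""
    rng = np.random.default_rng(seed)
    # A few rounds of naive k-means to group redundant content (illustrative).
    centers = embeddings[rng.choice(len(embeddings), n_clusters, replace=False)]
    for _ in range(10):
        labels = np.linalg.norm(
            embeddings[:, None] - centers[None], axis=2).argmin(axis=1)
        for k in range(n_clusters):
            members = embeddings[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    kept, counts = [], {}
    for k in range(n_clusters):
        # Rank this cluster's candidates by how rarely their attribute
        # value appears among already-kept samples, then keep the top few.
        candidates = sorted(np.where(labels == k)[0],
                            key=lambda i: counts.get(attributes[i], 0))
        for i in candidates[:keep_per_cluster]:
            kept.append(int(i))
            counts[attributes[i]] = counts.get(attributes[i], 0) + 1
    return sorted(kept)

# Example: 100 random "embeddings", two hypothetical attribute groups.
emb = np.random.default_rng(1).normal(size=(100, 8))
attrs = ["group_a" if i % 2 else "group_b" for i in range(100)]
print(fair_dedup(emb, attrs))
```

The design point the sketch tries to capture is that pruning decisions are no longer made on similarity alone: the diversity dimension acts as a tie-breaker, so the retained subset stays small without collapsing onto a single demographic slice of the data.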
FairDeDup is an improved version of an earlier method known as SemDeDup, which proved cost-effective but often exacerbated social biases. Those interested in this field can pick up Kris Hermans' Mastering AI Model Training: A Comprehensive Guide To Become An Expert In Training AI Models, currently available on Kindle for $9.99 or in paperback for $44.07.