The latest decision on the deepfake trend, which is spreading across the internet like an avalanche, has come from Google. Here are the details…
The world-famous technology giant Google has made a final decision on sexually explicit AI-generated content appearing in search results. The company has officially announced that non-consensual ‘deepfake’ content will be excluded from search results and blocked as soon as it is detected.
But what if Google can’t remove deepfakes?
Google has announced that deepfake content appearing in search results will be removed from search pages immediately. However, for technical reasons, some images may not be completely removed from search results. Google has a solution for this case as well.
Google does use its own AI-generated images in search results, but these images do not depict real people and contain no explicit content. The company said it is working with experts and victims of non-consensual deepfakes to address the issue and strengthen its systems.
Google has allowed people to request the removal of explicit ‘deepfake’ content for some time. Upon receiving a justified request, Google’s algorithms would search for and filter out explicit results resembling that person and, if necessary, remove them immediately.
With Google’s new approach, even if these images cannot be removed from search results, their visibility will at least be reduced, further protecting the personal rights of those affected.
What do you think about this? Is the company right to demote such images in search results? You can share your opinions with us in the comments.
You may also like this content
- Meta Building World’s Fastest AI Supercomputer for Metaverse
- Artificial Intelligence Will Make Decisions Instead Of People
- Bill Gates: Artificial Intelligence Over Web3 and Metaverse