Facebook-owned Instagram has launched a new feature that warns users about potentially offensive captions. The Artificial Intelligence-powered tool sends an alert when a user is about to post an offensive caption, nudging them away from wording that is unpleasant or hurtful. The Caption Warning feature applies to all captions on photos and videos: when a caption resembles others that have been reported, the tool automatically flags it with a notification. According to the photo and video-sharing social networking site, users will be given the option to edit the caption. However, they remain free to post it unchanged if they believe the wording complies with the platform's rules.
The company first introduced this feature in July this year, when it was limited to offensive comments. Now the social media platform has expanded it to captions on images and videos. The move is seen as part of the company's ongoing fight against online bullying; Instagram said the new tool will discourage people from posting offensive content. The company noted that it will not block users from publishing their words but will instead ask them to revise. It added that the feature will also educate users about what sort of content is not allowed on the platform, so users will know when their account risks violating the rules and inviting punitive action from the company.
Instagram has already put guidelines in place for users to follow when operating their accounts. If users are found violating the rules, the company is at liberty to block their accounts. Instagram has rolled out several features over the last few years to check online bullying, and Caption Warning is simply the latest in the line. Users already have the option to turn off comments on selected posts, filter comments, mute accounts, remove followers and more. Instagram, which was launched in October 2010, has more than a billion active users worldwide.