NEW YORK, July 11 -- Instagram has launched new measures to prevent bullying online, including a novel use of artificial intelligence to catch offensive messages before they are posted. Bullying on social media, particularly among youth, has been seen in Japan and many other countries around the world, with online problems sometimes escalating to crime or suicide.

Noting that it has endeavored for years to reduce bullying via AI that detects harmful comments, photos and videos, the Facebook-owned platform said, "We started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it's posted."

Calling bullying "a complex issue," Instagram said in a release on Monday, "We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves."

The new tool "gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification," Instagram said, adding that teens are unlikely to report online bullying even though they experience it the most.

Instagram said it will also test a new method called "Restrict" to protect a user's account from unwanted interactions. "Once you restrict someone, comments on your posts from that person will only be visible to that person. You can choose to make a restricted person's comments visible to others by approving their comments."

Under the new feature, restricted people will not be able to see "when you're active on Instagram or when you've read their direct messages," the operator said.

Author: Lora Smith