Facebook founder Mark Zuckerberg has presented a plan to let artificial intelligence (AI) review content posted on the social network.

While describing the roadmap, Zuckerberg claimed that Facebook's algorithms would be able to spot bullying, violence, terrorism, and even users at risk of suicide. He also admitted that, in the past, specific content had often been removed from the social network by mistake.

He also said it would take years of hard work to develop algorithms capable of reviewing and approving content on Facebook.

Talking about the use of AI to date, Zuckerberg admitted that it was not possible to manually review the billions of posts and messages that appear on the platform every day.

"The complexity of the issues we've seen has outstripped our existing processes for governing the community," he wrote.

He also said that Facebook is monitoring the site and researching systems that can read text and examine photos and videos in order to flag anything dangerous that might be happening.


He added: "This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content. Right now, we're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda."
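Facebook has not published how these systems work, but distinguishing news coverage from propaganda is, at its core, a text-classification problem. The toy sketch below uses an off-the-shelf scikit-learn pipeline with invented examples and labels; it is an illustration of the general technique only, not Facebook's actual implementation.

```python
# Purely illustrative sketch, not Facebook's system: framing the
# "news story vs. propaganda" distinction as text classification.
# All examples and labels below are invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets (invented for illustration).
posts = [
    "Officials confirmed the attack and investigators are on the scene.",
    "Reporters interviewed witnesses after the incident downtown.",
    "Join us, take up arms, and strike fear into our enemies.",
    "Spread this message and recruit others to the cause of violence.",
]
labels = ["news", "news", "propaganda", "propaganda"]

# TF-IDF bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# In a real moderation pipeline, a "propaganda" prediction would be
# routed to the human review team rather than acted on automatically.
print(model.predict(["Authorities released a statement about the inquiry."])[0])
```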

Personal News Feed Filtering

Zuckerberg said his company's goal was to let Facebook users post about whatever they liked or disliked, as long as the content stayed within the law. Over time, algorithms would automate more of the process by detecting what kind of content had been uploaded and subjecting it to AI scrutiny.

"Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. For those who don't make a decision, the default will be whatever the majority of people in your region selected, like a referendum," Zuckerberg explained. "It's worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years."
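The "personal settings with a regional default" idea maps naturally onto a simple fallback lookup: a user's explicit choice wins, and otherwise the setting falls back to the regional majority. The sketch below is hypothetical throughout; the category names, choices, and sample votes are invented for illustration.

```python
# Illustrative sketch of per-user content settings with a regional
# majority default, as described in the quote above. All names and
# data are hypothetical.
from collections import Counter

# Invented sample of choices users in one region made for one category.
region_choices = {"nudity": ["hide", "hide", "show", "hide"]}

def regional_default(category: str) -> str:
    """Default to the regional majority choice, like a referendum."""
    return Counter(region_choices[category]).most_common(1)[0][0]

def effective_setting(user_settings: dict, category: str) -> str:
    """Use the user's own setting if present, else the regional default."""
    return user_settings.get(category, regional_default(category))

alice = {"nudity": "show"}  # Alice drew her own line
bob = {}                    # Bob never decided

print(effective_setting(alice, "nudity"))  # -> "show"
print(effective_setting(bob, "nudity"))    # -> "hide" (regional majority)
```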

The plan was welcomed by the Family Online Safety Institute, a member of Facebook's own safety advisory board.