If there is a reasonably reliable way of doing it with an automated process then, fair enough, but I suspect that there are so many ways of expressing 'hostile' concepts that a human brain might be required.

A mailing list I am on screens incoming messages for words and phrases which suggest the message is hostile to a person rather than to that person's opinion. Messages considered hostile to a person are returned to the author with the trigger word or phrase highlighted for editing before they are released to the list members.
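Purely as an illustration of that sort of word-and-phrase screen, here is a minimal sketch in Python, assuming a hypothetical trigger list and a simple >>...<< highlighting convention (I don't know what software or word list that mailing list actually uses):

    import re

    # Hypothetical examples only; the real list's trigger phrases are not known to me.
    TRIGGER_PHRASES = ["idiot", "you people", "shut up"]

    def screen_message(body: str) -> list[str]:
        """Return the trigger phrases found in the message (case-insensitive);
        an empty list means the message can go straight to the list."""
        return [p for p in TRIGGER_PHRASES
                if re.search(re.escape(p), body, re.IGNORECASE)]

    def highlight_triggers(body: str) -> str:
        """Wrap each trigger phrase in >>...<< so the author can see what to edit."""
        for phrase in TRIGGER_PHRASES:
            body = re.sub(re.escape(phrase), lambda m: f">>{m.group(0)}<<",
                          body, flags=re.IGNORECASE)
        return body

    if __name__ == "__main__":
        msg = "You people never listen; the proposal itself is fine."
        if screen_message(msg):
            print("Returned to author:", highlight_triggers(msg))
        else:
            print("Released to the list.")

Of course, a fixed phrase list like this only catches the obvious wording, which is exactly why I suspect a human brain is still needed for the many other ways of expressing hostility.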
Of course, there could be one or two people who believe that hostile comments can be justifiable, and therefore should not be 'censored'!
Kind Regards, John