Is everything really better in moderation? We think so.
As more parents and community members turn to Facebook and Twitter to share their often colorful points of view, the need grows for ongoing dialogue about the appropriate balance between moderation and censorship.
On one hand, moderating day-to-day online feedback by removing negative comments from product and hotel reviews seems like the right thing to do. On the other hand, leaving negative comments up and offering a public apology or explanation can often be much more powerful. After all, no one is perfect, and reasonable people get that. There is a direct correlation between brands demonstrating that they are willing to be transparent and vulnerable and an increase in public trust and confidence, which tends to lead to more brand loyalty, not less.
So what about when parents say negative things about teachers? What types of comments are fair and ultimately represent an opportunity for freedom of speech, versus those that are defamatory and require thoughtful moderation to protect staff, children and community members from hurtful or even hateful language?
In arriving at our current best practice for moderating comments, we have learned a lot while working with public leaders to facilitate large-scale online engagement processes. Below are a few key learnings that guide our approach to moderation. They may also come in handy as leaders and their communications teams moderate their own social and community platforms:
Be clear in intention and be willing to be flexible
It’s better to have an approach than a “policy”
It’s equally important to consider the interests and rights of the people giving feedback and of those receiving it
It’s critical to align moderation activities with moderation expectations, and to clearly communicate them before and during the process
Keep as many comments “intact” as possible
Putting this to Practice
With a guiding belief that excessive moderation both narrows the range of perspectives available to inform decisions and can leave participants feeling censored – and a view to limiting the number of moderation criteria in place – we’ve come up with two clear criteria. Clearer criteria also result in more consistency across Moderators and engagements.
Our criteria are simply these: participants adding their comments are initially presumed innocent of malice; however, if their language meets either of the two criteria below, the thoughts will be flagged and removed:
Thoughts that are rude or hurtful to a person or a group of people
Thoughts that do not answer the question
The language choices in criterion #1 are deliberate, taking our cues from the experts on using grade 3 level language to ensure maximum accessibility and inclusivity. For this reason, the words “rude” and “hurtful” were chosen, with “rude” being a simple synonym for “inappropriate”. Something can be rude (i.e. written in four-letter words you would not say in front of most people’s granny) without targeting a specific person or group of people. Something hurtful may cause pain to a person or group of people on an emotional level, whether or not the participant who contributed the thought intended it.
Criterion #2 is in place to reduce the unnecessary clutter of comments participants need to wade through. If a comment doesn’t answer the question (i.e. “I don’t know” or emoticons), it doesn’t add any real value to the conversation.
So in our practice – and in yours if you choose to adopt this thinking – we need only ask ourselves two things before we consider striking the comment: is it “rude or hurtful to a person or a group of people” and “does it answer the question”?
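For teams that moderate at scale, the two-question check above can be sketched as a simple decision rule. The judgment calls themselves remain human; in this illustrative sketch (the function name and boolean inputs are our own, not part of any established tool), a Moderator answers the two questions and the code only applies the rule consistently:

```python
def should_remove(rude_or_hurtful: bool, answers_the_question: bool) -> bool:
    """Apply the two moderation criteria to one comment.

    A comment is flagged for removal if it is rude or hurtful to a
    person or group of people, OR if it does not answer the question.
    Otherwise it stays intact.
    """
    return rude_or_hurtful or not answers_the_question

# A Moderator reviews "I don't know :)" — not rude, but not an answer:
flag = should_remove(rude_or_hurtful=False, answers_the_question=False)
# flag is True: the comment is struck under criterion #2
```

Encoding the rule this way makes the default visible: a comment is kept unless one of the two criteria applies, which matches the presumption of innocence described above.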
A Nuanced Practice
As with all good plans, the devil is in the detail. The art (or science, depending on how you see it) of moderation is no different. The detail here is the nuances and subtleties of language and interpretation. Will any of us acting as Moderators get this right 100% of the time? Probably not. But using clear, consistent criteria should get us awfully close.