Moderating internal discussion forums, blogs and other social media
When starting intranet-based online discussions in an organisation, whether they are forums, blogs, comments on articles, or internal Twitter-style tools, how should you go about moderating them and maintaining their effectiveness? Where do you begin with writing policies and guidelines on use? Should you even monitor the discussions at all?
These questions come up frequently for intranet and internal communication teams. This article will outline:
- some best practice examples of policies and guidelines
- several different models and approaches for moderating online discussion
- the need for a clear post-launch strategy to encourage discussion
Fundamental to success
One point to recognise is that good governance and moderation are vital to the success of these platforms. There is, however, no one-size-fits-all solution.
Many organisations share similar concerns about online employee discussion. But the differences between organisations — in size, culture, history and employee experience — are important factors in whether and how moderation is implemented.
The first step with any online discussion platform should be to establish guidelines about what will and will not be acceptable. Abusive language and explicit content are obvious exclusions, but just how far will you let criticism go? Allowing people to complain endlessly about not having a vending machine on their floor is asking for trouble, but allowing genuine problems to surface could well be a primary aim. Such questions need to be addressed at an early stage of planning.
When writing the policy or guidelines, first be sure to review some that have already been written. IBM, Sun Microsystems and Intel have all published their blogging/social media/public discourse policies to the web and they’re a great place to start. Don’t worry that these are all technology companies — they have similar employee concerns to any other company, and perhaps more so (IBM has over 300,000 employees globally).
Guidance from the best
In fact, IBM and Sun have pushed the envelope with social media in large organisations. Their employees started to blog (publicly, in Sun’s case) long before it became the norm. In this area, where these two companies lead, others follow.
For example, IBM’s original ‘blogging guidelines’ (see sidebox), which have since morphed into the broader ‘social computing guidelines’, are also notable for having been created collaboratively by employees using a wiki. This approach has been replicated in many organisations since and is frequently cited as best practice. The guidelines themselves have formed the basis of many other social media policies.
Further examples with different perspectives can be found from the BBC, BT Group and Telstra.
BT and Telstra, both telecommunications firms at heart (although BT does significantly more than that these days), share many similarities.
Social media in government
A great example of government guidelines comes from the Australian Public Service Commission. The focus here, as with the BBC guidelines, is on the phrase online ‘participation’. Government agencies can be particularly daunting environments for social media proponents. The very idea of ceding control and allowing unfiltered commentary and discussion, internally or externally, remains a bridge too far for many departments.
Will existing policies be suitable?
What many of these custom policies and guidelines boil down to is one simple directive: ‘Don’t be stupid’ (or, alternatively, ‘be professional’).
Beyond a specific policy for social media, it’s a given that your organisation already has contractual policies covering staff business conduct, confidentiality, public discourse and so on. Under these, employees can and do behave in a professional manner.
Can you use these policies? Several companies have done just that, refocusing attention on their existing staff code of conduct. In many ways this can be sufficient to govern any new contributions to the company intranet — and why shouldn’t it be? Being offensive to a fellow employee, giving away trade/business secrets… these have been disciplinary or sacking offences for some time. There’s no reason why it should be any different now.
Lastly, recognise that even as you write your policy, a percentage of employees (usually 10% or more, and the proportion can only increase) will already be blogging and involved online, using Facebook, Twitter and the like. Whether or not you have a policy, and whether or not people are aware of it, many employees in your organisation are using social media right now. In fact, if you believe the latest statistics on Facebook’s membership, up to one in four people connected to the web have a Facebook account.
Best practice examples of social media and online participation guidelines
The next step: moderating discussion
Once the policy is developed and finalised, the next step is to consider if and how you’re going to moderate all of the discussion you hope to start. Again, there are a number of approaches.
Single sign-on and intranet logins
In 99% of scenarios, comments left by employees should be attributable to the people who wrote them. Without this, in allowing anonymous comments, you are inviting disaster.
The best way to ensure contributions are attributed to employees is to use their existing set of login details, and preferably this is part of a single sign-on process.
Many content management systems and software applications can be configured for single sign-on. It might be a challenge, but it’s one that’s definitely worth the time and effort.
Also ensure that staff cannot log into the discussion areas using generic logins, as one example I read about recently did, with somewhat predictable results.
A further benefit of single sign-on is that it reduces the effort needed to leave a comment. Being asked to ‘log in to comment’ can be very frustrating, especially on blogs (less so on forums).
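To make the mechanism concrete, here is a minimal sketch of how a platform might attribute comments via an SSO-provided identity. It assumes the SSO layer (for example, a reverse proxy) has already authenticated the employee and passes their username in a trusted header; the header name, directory and function names here are hypothetical illustrations, not a specific product’s API.

```python
# Minimal sketch: attributing comments via an SSO-provided identity.
# Assumes an SSO proxy has authenticated the employee and passes their
# username in a trusted header (header name is an assumption).

STAFF_DIRECTORY = {"jsmith": "Jane Smith"}  # stand-in for a real staff directory

def attribute_comment(headers: dict, text: str) -> dict:
    """Attach the authenticated employee's real name to a comment.

    Rejects the comment outright if no authenticated user is present,
    so anonymous or generic logins can never post.
    """
    username = headers.get("X-Remote-User")  # set by the SSO proxy (assumption)
    if not username or username not in STAFF_DIRECTORY:
        raise PermissionError("comments must be attributable to an employee")
    return {
        "author": STAFF_DIRECTORY[username],  # real name shown beside the comment
        "username": username,
        "text": text,
    }
```

The key design point is that the comment is rejected, not merely flagged, when no authenticated identity is available: this is how generic or shared logins are kept out of the discussion areas.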
Once you have finalised the mechanism for attributing comments, there are a number of further considerations and options.
Attributing comments to employees
The first decision to make is whether, and how, employees will be identified as the author of a comment. The best and strongest option is that if they leave a message on a discussion forum, or comment on a blog, their real name shows up beside the comment.
With the proliferation of online discussion and employees becoming increasingly familiar with contributing on various sites, only in exceptional circumstances should employee comments not be attributed to the person responsible. While anonymous commenting is sometimes preferred on the web, in an organisation staff need to be accountable for what they’re saying.
Cost, time and common fears
There are two main considerations when it comes to moderating:
- The fear from communications teams that with too much moderation you will stifle conversation, and the fear from management that too little moderation (or none) will inevitably result in a hotbed of controversial and/or vitriolic employee exchanges.
- The cost and benefit of having a full time moderator(s) or ‘community managers’ versus the low cost ‘self-regulating’ and light moderation models.
Zero or ‘self-regulating’ moderation
Self-regulating moderation is a good solution for many organisations, and is especially effective when comments are attributed to users.
Self-regulation is low cost, because there are no moderators to speak of; at most, one contact person is responsible for the uptime of the platform. The platform is governed by the policies drawn up for it, and employees can be asked to ‘agree’ to the stipulations within.
Self-regulation only works when contributions are clearly attributed to their authors. With their names clearly attached to their words, employees are far less likely to go beyond the set guidelines, and everyone will see if they do.
However, some consideration is needed here, because self-regulation takes you down one very clear line of thinking: making sure discussions are ‘safe’ for the corporate environment and then leaving them alone to tick along nicely. While this can certainly work, for some organisations it may not lead to the desired level of discussion and interaction among employees.
Light-touch moderation
Moderating with a light touch is often seen as essential to an internal discussion platform. You don’t want to be seen to be heavy handed, but management will want some kind of safety net. Light moderation may entail just one or two part-time moderators who are directly responsible for attending to any flagged comments, carrying out intermittent checks on the discussion areas, and generally overseeing their development.
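The light-touch model can be sketched as a simple flag queue: comments publish immediately, and moderators only review what readers have flagged. This is an illustrative sketch, assuming a platform that lets readers flag comments for review; the class and field names are hypothetical.

```python
# Minimal sketch of a light-touch moderation queue: comments are published
# immediately, and part-time moderators review only flagged items.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    flags: int = 0

class ModerationQueue:
    """Moderators check this queue intermittently rather than
    pre-approving every post."""

    def __init__(self, flag_threshold: int = 1):
        self.flag_threshold = flag_threshold
        self.comments: list[Comment] = []

    def post(self, comment: Comment) -> None:
        self.comments.append(comment)  # published immediately, not pre-moderated

    def flag(self, comment: Comment) -> None:
        comment.flags += 1  # a reader has reported this comment

    def pending_review(self) -> list[Comment]:
        # Only flagged comments need a moderator's attention.
        return [c for c in self.comments if c.flags >= self.flag_threshold]
```

The design choice worth noting is that nothing is held back before publication; the moderator’s workload scales with the number of flags, not the volume of discussion, which is what keeps this model cheap.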
Having a dynamic team of moderators is best practice in an ideal world, whether they operate with a seemingly light touch or a strong, development-focused presence.
Moderators on the web
Any large web forum (‘large’ meaning a couple of thousand users or more active at any one time) will have moderators. The most vibrant ones have many moderators, their own moderator forum (hidden from public view), fast response processes, arbitration channels, fast and direct escalation of topics and issues etc. They are relatively complex beasts.
But having a moderating team carries inherent costs, and it’s not an easy task to define the return on investment. Without a clear need or senior-level support, it might be difficult to justify a fully managed forum over a lower-cost alternative.
British Airways, for example, had at least one staff member monitoring the forum 24 hours a day to cope with its dispersed workforce (see Figures 1 and 2). They also had significant senior management support and participation, and a very real need for their employees to connect and discuss. This approach paid off with significant results and uptake of the forum, which also won a Gold Award in the 2008 Intranet Innovation Awards.
If self-regulating communities are about simply providing a discussion space and making sure those discussions are safe for the corporate environment, then a fully moderated forum is more about community development. Think of moderators here less as police and peacekeepers, and more as fully fledged community managers. It’s an important difference and can lead to very different results.
With guidelines and a moderation strategy in place, you’re in a position to focus directly on the launch of the platform. Far from being a button-pressing exercise, the new platform needs significant attention from the word go.
Leading by example
When the platform launches, team members should be prepared to dive in and encourage and lead discussion, perhaps even making constructive criticisms of their own, or highlighting issues that will get people talking (though at first exercise caution on anything relating to money, salaries or bonuses and so on).
Taking an active approach was the method used at the Department of Families and Communities in South Australia (see Figure 3). The ‘Spe@k e-asy’ blog by the department CEO was a live blog where participation was actively encouraged by the author. This example was a commended entry in the 2008 Intranet Innovation Awards.
Treat staff fairly and professionally
When discussion kicks off, being prepared is essential. If an employee posts fair but critical comments about the company, with or without their name attached, they must be treated fairly and their criticisms dealt with professionally.
If a comment or discussion contravenes the guidelines, edit or delete it (or close the thread), post a clear explanation of why the action was taken and refer readers back to the all-important guidelines. If necessary, contact the commenter via a more private channel such as e-mail and discuss with them the reasons for the moderation.
Don’t be heavy handed here either, but be fair, be honest and be aware that such e-mails themselves may be circulated.
Hybrid moderating models
With the approaches outlined above, and a strong recommendation for nurturing discussion at launch, one potential strategy is to mix the basic moderation models. For example, start with a fully managed approach, on the understanding that once people become familiar with the discussion environment, you can reduce the level of attention accordingly, perhaps in the long term to a minimum.
This also means that, having seen the benefits of a fully managed forum, you may notice a drop-off in the quality or frequency of discussions if you reduce attention later on. These are tactical trade-offs to be aware of.
As noted at the beginning of this article, good, appropriate moderation is critical to the success of a discussion space. Actions and responses in the first few days and weeks can set lasting precedents, good and bad. It’s vital that teams understand the options available and are prepared for when discussion takes off.