Our robust social media check gives a comprehensive insight into an individual's online reputation, identifying both positive and negative aspects of their online activity.
What we will look for
- Full digital footprint search: a full online search will be carried out to find multiple social media accounts used by the candidate, including those used under aliases
- International searches: content in any language is searched and translated into English
- Positive posts: voluntary and/or charity activities the candidate is involved with that may align with your organisation's values
Using basic information such as an individual’s name, email, mobile phone number or CV, we will explore their digital footprint across multiple languages to identify multiple social media accounts and aliases, should they exist.
While content is initially scoured by the latest Artificial Intelligence (AI) technology, a human will double-check each report to make sure any content flagged is directly linked to the individual being screened.
The algorithms behind this AI are also continuously adjusted, so they are always learning how day-to-day language is used.
This allows the system to avoid taking innocent words out of context and flagging them red where no flag is warranted.
We will search for content that falls under the following definitions.
Positive Risk Indicators
- Evidence of the individual raising funds for a charitable cause, such as undertaking a challenge and actively seeking donations to a charity
- Evidence of the individual working as a volunteer, such as a role listed by the individual on social media or images of the subject volunteering

Negative Risk Indicators
- Reputationally damaging or unprofessional content: the use of profanities directed towards another individual; the use of extreme profanities in any context; shared images, videos and content which involve professionally inappropriate humour
- Extreme views or opinions: posting content which suggests membership or support of extremist or terrorist groups
- Illegal activities: posting content suggesting involvement in activities including, but not limited to, financial crime, computer crime (e.g. hacking) and drug dealing; content which shows the subject to have been arrested, imprisoned or tried in court
- Sexually explicit content: content that depicts or describes nudity and/or sexually explicit material, including discussing or sharing images/videos of sexual acts in any context, photos or videos of nudity, and discussion of sexual organs in a non-factual context
- Hate and discriminatory behaviour: content which uses hateful language, slurs or abuse targeted at specific groups of people, including the use of racist, sexist or homophobic words in any non-factual context and abusive content targeted at an individual or group due to a specific attribute
- Potential addiction or substance abuse: content which indicates addiction to or dependence on a substance or activity, such as posts, images or videos of the subject consuming drugs, or language which suggests the subject has an addiction, is unable to quit an addiction or is struggling with over-reliance on a substance or activity
- Violence: content which displays violence, recounts violence perpetrated by the individual or incites violence, such as sharing graphic photos/videos of death, war or violence which may be disturbing to other users, photos/videos of the subject displaying violent behaviour or weapon use, or posts inciting violence against other individuals or groups
- Other: any material that warrants a red flag but does not sit naturally in one of the categories above

Easy to read reports
Our social media check reports include both negative and positive flags to help assess each individual's reputation and cultural fit with your organisation. They include screen captures of any social media profiles we identify as belonging to the individual, as well as any posts deemed relevant against one of the content definitions above.
The reports detail any content found against the definitions and assign one of the following three flags based on the results:
- Red – content was found against a negative definition
- Green – no content was found against a negative definition, and/or content was found against a positive definition
- Grey – no content was found against either a positive or a negative definition
If we are unable to trace any social media profiles related to the individual, we will assign the report a green flag due to the absence of red-flag content, but will separately email you to flag the risk that false information may have been provided.
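One way to read the flag rules above is as a small decision function. The sketch below is purely illustrative (the function name, inputs and the treatment of the grey flag are assumptions, not part of the actual screening system):

```python
def assign_flag(negative_found: bool, positive_found: bool,
                profiles_found: bool = True) -> str:
    """Illustrative sketch of the red/green/grey flag decision."""
    if not profiles_found:
        # No profiles traced: green flag is assigned, with a separate
        # follow-up email flagging the risk of false information.
        return "green"
    if negative_found:
        # Any content against a negative definition outweighs positives.
        return "red"
    if positive_found:
        return "green"
    # Profiles found, but nothing matched any definition.
    return "grey"
```

Note the ordering: a red flag takes priority whenever negative content is present, regardless of any positive content also found.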
Our social media check reports only include user-generated content from the individual and do not extend to any third-party content such as news articles.
Reports are issued in English but may contain data sourced from anywhere in the world. A report will only contain content connected to the individual, so it will not include any false positives.
Aside from protecting your brand from the online activity of new employees, social media activity is now recognised by bodies such as the BSI and the FCA as an important aspect of checking individuals' suitability for regulated roles.
As part of BS7858, the British Standards Institution (BSI) has issued advice on social media and online activity screening for employees in a secure environment.
The Financial Conduct Authority (FCA) has also cited pre-employment social media screening in its handbook as an example of 'good practice' for the organisations it regulates.
Our social media checks remain fully compliant: information outside the content definitions above, such as protected characteristics or social activity that is not tagged as a risk, will not be included. Reports also will not contain any aspects protected by the Equality Act 2010, such as political views or religious beliefs.