Hate speech has spread openly on social media, with users hiding behind anonymity to post vile comments.
The government and tech companies have been left scrambling to control messages that may spark violence as the country heads to the General Election in less than a month.
Less than a fortnight ago, new guidelines on social media use were drafted, with a proposed fine of Sh1 million or a jail term of five years for Kenyans sharing inflammatory content via Facebook, Twitter and WhatsApp this election season.
However, even with the guidelines set by social networking sites and the local National Cohesion and Integration Act, individuals still exploit loopholes in the system to propagate inflammatory messages.
For example, Facebook employs content moderators for its pages; however, with over 2,000 languages and dialects on the site, it is not possible for all hateful content to be flagged.
“The proportion of users on Facebook compared to those who report content that should not be on Facebook is very small,” said Ebele Okobi, Facebook's Public Policy Director for Africa. In addition, some users set up fake accounts to post threats and inflammatory comments, hiding behind anonymity to avoid prosecution.
“Hate speech for Facebook is content that attacks individuals based on actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease,” said Ms Okobi.
This creates the need for users to play their part in preventing and stopping hate crimes on social networking and messaging sites.
“There are many things that people feel that the government is responsible for. I have people calling me to have stuff pulled down,” said ICT Cabinet Secretary Joe Mucheru.
According to Mr Mucheru, social networking sites such as Facebook and YouTube are responsible for the content on their pages.
Germany passed a controversial law that fines social media companies up to Sh5.9 billion (€50 million) for failing to pull down hate speech.
The sites are also required to submit public reports on how many posts were flagged and how many were removed under the law, which comes into force in October.
In Kenya, there is no law on a site's responsibility for content posted on it; only individuals are targeted.
Early last year, a video flagged by the Kenya Film Classification Board (KFCB) as inappropriate for promoting gay relationships sparked a tussle between the board and Google. According to the KFCB chair, the board submitted a legal request to have the video pulled down. YouTube added a disclaimer indicating that the video had been flagged as inappropriate for some audiences but did not take it down.
Some users have experienced similar challenges on other sites, where content is flagged as inappropriate with no further action taken. According to Ms Okobi, the guidelines on hate speech and inappropriate content are global.
This approach cuts across most sites, meaning that content that is generally accepted across the world but offends one or two outlier countries may not be taken down, as the guidelines do not take individual cultural and national practices into consideration.
Most of these sites require you either to contact the user who posted the content and ask them to take it down, or to flag the content and report it under one of the listed reasons to have the company pull it down.
Further, an aggrieved individual in Kenya is required to report the issue to the Directorate of Criminal Investigations for an investigation to be launched and a case filed.
“Repeat offenders on the site have accounts deactivated or are placed under checkpoints (where they cannot access their accounts for a period of time) or, in very severe instances, they are permanently removed from the site,” said Ms Okobi.