
At a glance: Singapore's proposed rules to reduce online harm


SINGAPORE – New online safety measures that take aim at harmful content on social media platforms are set to be rolled out as early as 2023, after most respondents in a public consultation exercise supported the two new codes of practice.

The measures – proposed under the Code of Practice for Online Safety and the Content Code for Social Media Services – aim to keep harmful content away from local users, especially the young. They will also grant the authorities powers to take action against platforms that fail to comply.

The new codes are expected to be added to the Broadcasting Act if they are passed in Parliament.

These are the key points from the proposals by the Ministry of Communications and Information (MCI):

1. Tools to protect young users

Under the proposed rules, social media services will have to provide tools that allow parents and guardians to manage the content that a young user can encounter online and limit any unwanted interactions.

The tools will prevent others from seeing young users' account profiles and the posts they upload, and will limit who can interact with their accounts.

MCI proposed that the tools be activated by default for platforms that allow users below the age of 18 to sign up for an account. The platform should also warn young users and parents of the potential risks, should they choose to weaken the settings.

Social media platforms should also provide safety information that is easy for young users to access and understand. This information should offer guidance on how young users can be protected from harmful content and unwanted interactions.


2. Platforms expected to sweep content for online harms

The platforms will be expected to moderate users' exposure to harmful content, or disable access to it, when users report it.

The reporting process should be easy to access and use, and platforms should assess and take action “in a timely and diligent manner”.

Platforms will also be required to proactively detect and remove any content related to child sexual exploitation, abuse and terrorism.

Tools for users to manage their own exposure to unwanted content and interactions should be implemented as well. This will allow users to hide unwanted comments on their feeds and limit their interaction with other users.

Safety information, such as local support centres, should be easily accessible to users. Details of helplines and counselling services should be pushed to users who search for high-risk content, such as those related to self-harm and suicide.


