Part of the debate – in the Senedd at 6:50 pm on 30 November 2022.
The Online Safety Bill, currently at Report Stage in the House of Commons, delivers the UK Government's manifesto commitment to make the UK the safest place in the world to be online, while defending free expression. The Bill has been strengthened and clarified since it was published in draft in May 2021, reflecting the outcome of extensive parliamentary scrutiny. So, let me tell you all today what the Bill does. The Bill introduces new rules for firms that host user-generated content, meaning services that allow users to post their own content online or interact with each other, and for search engines, which will have tailored duties focused on minimising the presentation of harmful search results to users. Platforms that fail to protect people will need to answer to the regulator and could face fines of up to 10 per cent of their revenues or, in the most serious cases, be blocked.
It was great to hear about some of the work being carried out by Twitter and TikTok to keep people safe online when I met with the social media giants recently at their headquarters. Twitter, for example, allows users to mute certain words, phrases, emojis and hashtags, and to control who can reply to their tweets. Earlier this year, the company launched the Twitter Circle experiment: users choose who is in their Twitter Circle, and only the individuals they have added can reply to and interact with the tweets they share. These are just some of the safety tools Twitter users can access. Companies like Twitter take online safety incredibly seriously, and they remain committed to investing in the moderation of illegal or harmful content as they endeavour to provide a service that is safe and informative for all. From July to December 2021, Twitter removed 4 million tweets that violated its rules. Of the tweets removed, 71 per cent received fewer than 100 impressions prior to removal, a further 21 per cent received between 100 and 1,000 impressions, and only 8 per cent had more than 1,000 impressions.
For those who, like me, enjoy numbers and statistics: in total, impressions on these violative tweets accounted for less than 0.01 per cent of all impressions for all tweets during that period. All platforms in scope will need to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse. Platforms likely to be accessed by children will also have a duty to protect young users of their services from legal but harmful material, such as content promoting self-harm or eating disorders. TikTok has taken a safety-by-design approach to preventing harm online, which, I must admit, is truly commendable. The company has made a number of changes for users aged under 18, such as designing its settings to be private by default. For example, users aged 13 to 15 are given private accounts by default, meaning their videos can only be watched by people they approve as followers. TikTok also has age-appropriate features that restrict what can be sent via private messaging, alongside age checks and age assurance. It also allows parents and caregivers to link their TikTok account with their teen's and customise various safety settings.
It is worth noting that, between April and June this year, TikTok removed more than 113 million videos, roughly 1 per cent of the content uploaded to the platform, for violating its community guidelines. Of these videos, 95.9 per cent were removed proactively by TikTok before being reported by a user, 90.5 per cent were removed before they had received a single view, and 93.7 per cent were removed within 24 hours.
Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content. The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include abuse, harassment and content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce those terms properly. These services will have a duty to bring in user empowerment tools, giving adult users more control over whom they interact with and the legal content they see, as well as the option to verify their identity.
We all love and appreciate freedom of expression, and it will be protected, because these laws are not about imposing excessive regulation or state removal of content, but about ensuring that companies have the systems and processes in place to keep their users safe. For anyone here who thinks the Bill is weak or watered down, let me assure you that it offers a triple shield of protection, so it's certainly not weaker in any sense. The triple shield requires platforms, firstly, to remove illegal content; secondly, to remove material that violates their terms and conditions; and thirdly, to give users controls to help them avoid seeing certain types of content to be specified by the Bill. This could include content promoting eating disorders or inciting hate on the basis of race, ethnicity, sexual orientation or gender reassignment. The chief executive of the Center for Countering Digital Hate, Imran Ahmed, added that it was welcome that the Government
'had strengthened the law against encouragement of self-harm and distribution of intimate images without consent'.
Much of the enforcement of the new law will fall to the communications and media regulator, Ofcom, which we often hear about in connection with TV and other broadcast and online provision. As I mentioned earlier, Ofcom will be able to fine companies up to 10 per cent of their worldwide revenue, which can run into the billions. When drawing up the codes that technology companies must follow, Ofcom must now consult the victims' commissioner, the domestic abuse commissioner and the children's commissioner. Proportionate measures will avoid unnecessary burdens on small and low-risk businesses. Finally, the largest platforms will need to put in place proportionate systems and processes to prevent fraudulent adverts being published or hosted on their services. This will ultimately tackle the harmful scam adverts that have had a devastating effect on their victims, regardless of age and background.
I know concerns have been raised about perceived delays in the progress of this Bill through Parliament, and I welcome the assurance given by a spokesperson at the Department for Digital, Culture, Media and Sport, and I quote:
'Protecting children and stamping out illegal activity online is a top priority for the government and we will bring the Online Safety Bill back to Parliament as soon as possible.'
So, I hope the Bill passes its remaining stages as soon as is practical. When that is achieved, we in the Senedd should focus on what we can do to ensure the new regulatory regime is implemented in a way that prevents harm, protects and supports children, and advocates for their rights in the online world right here in Wales. Passing the legislation will be a significant milestone. However, let's be real: no online safety Bill can remove all threats and issues from the lives of children. Five years after the introduction of the first Welsh Government action plan on online safety, it's time we took stock and defined Wales's role in the new regulatory regime. Children's voices must be at the heart of shaping Wales's role after the legislation. Every effort should be made to engage children and young people, to hear their concerns, but also to find solutions for how we can make the online world a safer place for them all.
I would like to ask the Welsh Government to set up an inquiry into child online safety to audit what gaps remain in realising children's right to be safe online. Areas for consideration by such an inquiry could and should include, firstly, how we ensure the new relationships and sexuality education curriculum supports and realises children and young people's right to be safe and protected online. Secondly, it should examine what additional training should be rolled out for professionals who work with children and young people. Thirdly, it should scrutinise the Welsh Government's enhancing digital resilience in education action plan, the forthcoming peer-on-peer sexual harassment action plan and any successor to the action plan on preventing and responding to child sexual abuse. This would ensure that they speak to one another, put politics aside, and deliver an approach that protects children and young people and enables them to speak out and to seek and receive the support they need; protecting young people is, once again, paramount for us. And finally, it could consider the risks of, and the adequacy of responses to, online communication through the medium of the Welsh language. Exploring children's needs and experiences in this space would ensure that all children receive equality of protection.
Deputy Presiding Officer, the UK Government's Online Safety Bill should not be an end in itself but a means to an end. The Welsh Government must build on the Bill to tackle the drivers of online harm. The onus must not fall solely on the child to be resilient or to keep themselves safe online; the Welsh Government must fulfil its duty to Welsh children under the UNCRC and ensure it responds to the unprecedented levels of grooming and child sexual abuse that we are seeing online every day. Thank you.