One of the big problems is that if social media is causing harm, it's far from clear what it is about social media that causes the harm. Social media is a bundle of features assembled in different ways across different products. There are lots of potential candidates for which part could be harmful, and they all have different implications for what you need to do about it. A short list could be: increased exposure to photos of themselves; increased exposure to idealised photos of others; a pervasive online messaging environment that makes it harder to escape bullying at home; increased exposure to content about self-harm or mental illness; more explicit social competition through likes/shares/comments; or increased exposure to other stressful content. As Stuart Ritchie pointed out in his article, not only is the external evidence not that strong, but the platforms' own internal evidence to date hasn't been that great either. That makes it very difficult to put pressure on Facebook to improve the design of its site, as it's not very clear what it needs to improve.
It also makes it very difficult to give good advice to parents about which apps children should avoid. The features that could cause harm exist to varying extents in a wide variety of apps, from YouTube to Instagram to WhatsApp, and even in some apps targeted at children, like Roblox. The trend is to include more of these features in a greater variety of apps, so it's harder and harder to say that one set of apps is social media and another set is not.
On what the state can do about it, the UK Government's approach is probably best seen through the Age Appropriate Design Code, which sets out what platforms should think about when designing services that could be used by under-18s, rather than only things targeted at under-18s: https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-a-code-of-practice-for-online-services/ Broadly, it's part of a wider movement holding that platforms need to be designed and regulated for the presence of young people unless they can absolutely guarantee that young people won't use them. I'm not sure that's the right approach, but it is one worth talking about.
First evidence link is broken for me
Which sentence? Thanks
3. We finally have the ===first evidence=== of a direct causal relationship via a very clever US study using the staged rollout of Facebook across US college campuses to assess the impact on mental health.
Might just be my iPad but opens up to new window with blank url
Many thanks. Now fixed.