Please use this identifier to cite or link to this item: https://t2-4.bsc.es/jspui/handle/123456789/60480
Full metadata record
DC Field | Value | Language
dc.creator | Jirotka, M, University of Oxford | en
dc.date | 2017-09-15T00:00:00Z | en
dc.identifier | 852794 | -
dc.identifier | 10.5255/UKDA-SN-852794 | -
dc.identifier | https://doi.org/10.5255/UKDA-SN-852794 | -
dc.identifier.uri | https://t2-4.bsc.es/jspui/handle/123456789/60480 | *
dc.description | Transcripts of fieldwork interviews with professionals involved in the governance of social media communications. In these interviews, professionals describe how the organisations they work for deal with various forms of harmful content on social media (rumour, hate speech, etc.). Data from an online Delphi panel, which sought the opinion of informed experts on the appropriate governance of social media, have also been deposited in the Oxford ORA archive. These data are under embargo until February 2018 and will then become available under the T&Cs of the Oxford ORA archive (see Related Resources). The project investigated the spread of harmful content on social media and identified opportunities for the responsible governance of digital social spaces. As a collaborative team of computer scientists, social scientists and ethicists, we investigate the impacts that content such as rumour, hate speech and malicious campaigns can have on individuals, groups and communities, and examine social media data to identify forms of ‘self-governance’ through which social media users can manage their own and others’ online behaviour. We also draw on the perspectives of other key players, such as social media companies, legislators, the police, civil liberties groups and educators, to explore ways in which the spread of harmful social media content might be prevented, limited or managed. The project will produce a number of practical outputs, including an online social media safety resource and a set of teaching and learning materials for schools and young people; these include our video animation #TakeCareOfYourDigitalSelf. Project data have been archived, with some exclusions: we have not archived the social media posts collected via the Twitter API, because Twitter's T&Cs, and our own project best-practice guidelines, restrict archiving to Tweet IDs (rather than the content of tweets), and these were not gathered in our data collection.
dc.description | The rapid growth of social media platforms such as Twitter has had a significant impact on the way people can connect and communicate instantaneously with others. Content that users put onto social media platforms can 'go viral' in minutes, and that content, whether text, images or links to other sites, can have profound effects on events as they unfold, for good or ill. In times of disaster, tweeting about events can call people to help from around the globe. But people can also spread dubious and dangerous information, hate speech and rumours via social media, a type of behaviour that has been called a "digital wildfire". A World Economic Forum report identifies two situations in which digital wildfires are most dangerous. The first is a situation of high tension, when false information or inaccurately presented imagery can cause damage before it is possible to correct it; the real-world equivalent is shouting "fire!" in a crowded theatre: even if it takes only a moment for the realisation to spread that there is no fire, in that time people may already have been crushed to death in the scramble for the exit. The second is when widely circulated information leads to 'groupthink' that may be resistant to attempts to correct it. Digital wildfires can seriously challenge the capacity of traditional media, civil society and government to report accurately and respond to events as they unfold. Yet how people communicate in these digital social spaces is not well understood; users may not fully understand how these spaces 'work' as channels of communication, so what constitutes appropriate and responsible behaviour may be unclear. The challenge, then, is to develop appropriate ways of governing these spaces and of applying and using them responsibly.

This project will address that challenge by framing the study within a programme of work known as Responsible Innovation in ICT and by developing a methodology for the study and advancement of the responsible governance of social media. A key question is the extent to which people in these spaces 'self-regulate' their behaviour. Where self-regulation is evident, there is a case for exploring how self-correction mechanisms might be amplified so that false rumours are identified more quickly; the legitimacy of new governance mechanisms may be enhanced if they respect and build on such existing self-governance techniques. Drawing on a range of methods, we will examine how social media are used, how people consume the information they find there and what roles they play in its production, and how (mis)information flows spread in real time. We will draw on a selection of case studies of rumour and hate speech sourced from our recent and ongoing research in social media. From these analyses we will produce a digital tool that detects and visualises rumour, misinformation and antagonistic content, and shows how this relates to self-regulative behaviour such as counter-speech, the dispelling of rumours and verification practices, so that people can make better-informed decisions on how to manage emerging situations in response to real-world events. We will also conduct fieldwork at various sites (police, social media platforms, Google, civil rights organisations, news media) to investigate how stakeholders respond to the challenges presented by events, such as sporting events, civil disturbances and electoral campaigns, where misinformation, rumour and antagonistic content on social media may be a concern. From our analyses the project will develop an ethical security map for the practices of governing the use of social media.

We will complement this ethical security map with a range of outputs for broader impact, such as engaging with secondary schools, where we will develop a reflection and training module on digital wildfire for young people: one of the largest age groups actively using social media and also a relatively vulnerable social group. | en
dc.language | en | -
dc.rights | Marina Denise Anne Jirotka, University of Oxford | en
dc.subject | SOCIAL MEDIA | en
dc.subject | DELPHI PANEL QUESTIONNAIRE | en
dc.subject | INTERNET GOVERNANCE | en
dc.subject | QUALITATIVE INTERVIEWS | en
dc.subject | 2017 | en
dc.title | Digital Wildfire project: Delphi panel on the responsible governance of social media and interview data on dealing with harmful social media content | en
dc.type | Dataset | en
dc.coverage | United Kingdom | en
Appears in Collections: Cessda

Files in This Item:
There are no files associated with this item.
