Please use this identifier to cite or link to this item: https://t2-4.bsc.es/jspui/handle/123456789/71502
Title: The 'Call me sexist but' Dataset (CMSB)
Description: This dataset consists of three types of short-text content:

1. social media posts (tweets),
2. psychological survey items, and
3. synthetic adversarial modifications of the former two categories.

The tweet data can be further divided into three separate datasets based on their source:

1.1 the hostile sexism dataset,
1.2 the benevolent sexism dataset, and
1.3 the callme sexism dataset.

Datasets 1.1 and 1.2 are pre-existing datasets obtained from Waseem, Z., & Hovy, D. (2016) and Jha, A., & Mamidi, R. (2017), respectively, which we re-annotated (see our paper and data statement for further information). The rationale for including these datasets specifically is that they feature a variety of sexist expressions in real conversational (social media) settings. In particular, they feature expressions that range from overtly antagonizing the minority gender through negative stereotypes (1.1) to leveraging positive stereotypes to subtly dismiss it as less capable and fragile (1.2).

The callme sexism dataset (1.3) was collected by us based on the presence of the phrase 'call me sexist but' in tweets. The rationale behind this choice of query is that many Twitter users voice potentially sexist opinions and signal this by using the phrase, which arguably serves as a disclaimer for sexist statements.

The survey items (2) come from attitudinal surveys designed to measure sexist attitudes and gender bias in participants. We provide a detailed account of our selection procedure in our paper.

Finally, the adversarial examples (3) were generated by crowdworkers on Amazon Mechanical Turk, who made minimal changes to tweets and scale items in order to turn sexist examples into non-sexist ones. We hope that these examples will help control for typical confounds in non-sexist data (e.g., topic, civility), lead to datasets with fewer biases, and consequently allow us to train more robust machine learning models. For ethical reasons, we only asked workers to turn sexist examples into non-sexist ones, not vice versa.

The dataset is annotated to capture cases where text is sexist because of its content (what the speaker believes) or its phrasing (the speaker's choice of words). We explain the rationale for this codebook in our paper cited below.
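As a minimal sketch of how one might work with the dataset, assuming it is distributed as a CSV export with hypothetical column names for the text, the source subset, and the sexism label (the actual file layout and column names may differ; consult the data statement):

```python
# Minimal sketch, assuming a CSV export with hypothetical columns:
# "text", "dataset" (hostile / benevolent / callme / scales / adversarial),
# and "sexist" (boolean label). Real column names may differ.
import pandas as pd

df = pd.read_csv("sexism_data.csv")  # hypothetical file name

# Split by source subset (1.1-1.3, survey items, adversarial modifications).
by_source = {name: group for name, group in df.groupby("dataset")}

# Keep only examples labelled sexist, e.g. for error analysis.
sexist_examples = df[df["sexist"] == True]

print(df["dataset"].value_counts())
print(f"{len(sexist_examples)} of {len(df)} examples are labelled sexist")
```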
URI: https://t2-4.bsc.es/jspui/handle/123456789/71502
Other Identifiers: 10.7802/2251
https://search.gesis.org/research_data/SDN-10.7802-2251?lang=de
https://search.gesis.org/research_data/SDN-10.7802-2251?lang=en
Appears in Collections: Cessda

Files in This Item:
There are no files associated with this item.

