Natalie Weaver Asked Twitter to Change Its Reporting Policies After an Offensive Image of Her Daughter Was Shared
Article updated March 6, 2020.
Two years after disability advocate and mom Natalie Weaver advocated for Twitter to update its reporting policies, the social media platform has finally formalized new rules against hateful conduct toward people with disabilities and other health conditions.
In 2018, Weaver was sent a tweet that used her daughter’s image to promote abortion. Her late daughter Sophia, who died May 24, 2019, at the age of 10, had Rett syndrome and facial differences.
When Weaver saw the photo, her heart sank, she told The Mighty. Though she blocked the account using Sophia’s photo, it was brought to her attention that the photo was still being circulated and was the pinned tweet on the account’s profile.
“I decided enough was enough,” Weaver told The Mighty. “He is not using my daughter’s image in that hateful way.”
So Weaver and others began reporting the tweet to Twitter. Three days later, Twitter responded, saying that the tweet did not violate its rules and regulations. Then, after reviewing the tweet again, Twitter suspended the person who posted the photo and apologized for its error.
Thank you @TwitterSupport & @jack for listening! The account that was using my daughter’s image has been suspended! Thank you to the thousands of people who reported this & supported us! Thank you for taking a stand against hate! pic.twitter.com/77jPFljYPs
— Natalie Weaver (@Nataliew1020) January 22, 2018
Weaver’s fight wasn’t over. Julie Ross, a fellow disability advocate, tweeted at Weaver that Twitter’s reporting form — the one users fill out when flagging a tweet — did not include an option for hate against people with disabilities.
While hateful content that targets people with disabilities was not included in Twitter’s reporting form, it is included in the social media platform’s hate conduct policy. Both Weaver and The Mighty reached out to Twitter about the discrepancy between its policy and how offensive content is reported but did not hear back at the time.
In April 2018, Twitter announced it had reconciled the discrepancy and added a disability-specific reporting option to its reporting form. Weaver applauded the social media company’s move.
“Many people with medical conditions and/or disabilities receive hate and harassment every day on Twitter and no violations are found,” Weaver told The Mighty. “I am hopeful that this change will create a safer environment for people with medical conditions and disabilities. I hope that Twitter support will be more consistent now in finding violations. I will continue my work to ensure that Twitter support follows through on this.”
It's against our rules to directly attack or threaten someone based on their protected category, including disability. You asked us to clarify this in our reporting flow, and we've updated it to be more specific. https://t.co/h7PrSjPkC2 pic.twitter.com/WXDmyGwwiT
— Twitter Safety (@TwitterSafety) April 2, 2018
Weaver has long been an outspoken advocate for social media platforms to update their reporting policies to include people with disabilities. In addition to pushing for change with Twitter, Weaver took Instagram to task in April 2019 for not including people with disabilities in its reporting form. Other platforms, like TikTok and YouTube, have also come under fire for questionable moderation policies that discriminate against disabled users rather than removing bad actors from the platforms.
Two years after Weaver’s initial advocacy efforts, Twitter took its commitment to protect people with disabilities on the platform one step further. Now, tweets containing hate speech based on age, disability and other health conditions will be removed from the platform, a strengthening of its former policies. In a March 5, 2020, blog post, Twitter said the change is based on the understanding that what’s said online also has a real-world impact.
“Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk,” Twitter wrote in the blog post. “Today, we are further expanding this rule to include language that dehumanizes on the basis of age, disability or disease.”
Photo courtesy of Natalie Weaver