Twitter rewrites Developer Policy to better support academic research and use of ‘good’ bots
Twitter today updated its Developer Policy to clarify its rules around data usage, including for academic research, and its position on bots, among other changes. The policy has also been entirely rewritten to simplify the language and make it more conversational, Twitter says. The new policy has been shortened from eight sections to four, and the accompanying Twitter Developer Agreement has been updated to align with those changes.
One of the more notable updates to the new policy is a change to the rules to better support non-commercial research.
Twitter data is used to study topics like spam, abuse and other areas related to conversation health, the company noted, and it wants these efforts to continue. The revised policy now allows the use of the Twitter API for academic research purposes. In addition, Twitter is simplifying its rules around the redistribution of Twitter data to aid researchers. Now, researchers will be able to share an unlimited number of Tweet IDs and/or User IDs, if they’re doing so on behalf of an academic institution and for the sole purpose of non-commercial research, such as peer review, says Twitter.
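In practice, a shared dataset of Tweet IDs is typically "hydrated" back into full Tweets by the recipient through the Twitter API. The sketch below illustrates the idea using the documented v1.1 statuses/lookup endpoint; the bearer token, file name and helper function are placeholders for illustration, not anything prescribed by the policy.

```python
# Hypothetical sketch: "hydrating" a shared list of Tweet IDs back into full
# Tweet objects via the v1.1 statuses/lookup endpoint (up to 100 IDs per
# request). BEARER_TOKEN and tweet_ids.txt are placeholders.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # from a Twitter developer app
LOOKUP_URL = "https://api.twitter.com/1.1/statuses/lookup.json"

def hydrate(tweet_ids):
    """Yield full Tweet objects for the given IDs, 100 at a time."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    for i in range(0, len(tweet_ids), 100):
        batch = tweet_ids[i:i + 100]
        resp = requests.get(
            LOOKUP_URL,
            headers=headers,
            params={"id": ",".join(batch), "tweet_mode": "extended"},
        )
        resp.raise_for_status()
        for tweet in resp.json():
            yield tweet

if __name__ == "__main__":
    # A dataset shared for research is typically just a plain list of IDs.
    with open("tweet_ids.txt") as f:
        ids = [line.strip() for line in f if line.strip()]
    for tweet in hydrate(ids):
        print(tweet["id_str"], tweet["full_text"][:80])
```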
The company is also revising rules to clarify how developers are to proceed when the use cases for Twitter data change. In the new policy, developers are informed that they must notify the company of any “substantive” modification to their use case and receive approval before using Twitter content for that purpose. Not doing so will result in suspension and termination of their API and data access, Twitter warns.
The policy additionally outlines when and where "off-Twitter matching" is permitted, meaning when a Twitter account may be associated with a profile built from other data. Developers must either obtain opt-in consent from the user in question, or rely only on information the person provided directly or that is publicly available.
The above changes are focused on ensuring Twitter data is accessible when being used for something of merit, like academic research, and that it’s protected from more questionable use cases.
Finally, the revamped policy clarifies that not all bots are bad. Some even enhance the Twitter experience or provide useful information, the company says. As examples of good bots, Twitter pointed to the fun @everycolorbot and the informative @earthquakesSF.
Twitter identifies a bot as any account where behaviors like “creating, publishing, and interacting with Tweets or Direct Messages are automated in some way through our API.”
Going forward, developers must specify if they’re operating a bot account, what the account is, and who is behind it. This way, explains Twitter, “it’s easier for everyone on Twitter to know what’s a bot – and what’s not.”
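To make that definition concrete, here is a minimal, hypothetical sketch of the kind of automation Twitter is describing: a harmless bot that publishes a Tweet through the API on a schedule, written with the tweepy library (all credentials and the message are placeholders). Under the updated policy, the account running something like this would need to be identified as a bot and its operator disclosed.

```python
# Hypothetical sketch of a simple "good bot": an account whose Tweets are
# automated through the Twitter API, posting an informational update hourly.
# Credentials are placeholders; uses the tweepy library (v3.x-style API).
import time
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def post_update(text):
    """Publish a single automated Tweet on the bot account."""
    api.update_status(status=text)

if __name__ == "__main__":
    while True:
        # Timestamp keeps each automated status unique.
        post_update(f"Automated status update at {time.strftime('%H:%M UTC', time.gmtime())}")
        time.sleep(3600)
```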
Of course, those operating bots for more nefarious purposes — like spreading propaganda or disinformation — will likely just ignore this policy and hope not to be found out. This particular change follows the recent finding that a quarter of all tweets about climate change were coming from bots posting messages of climate change denialism. In addition, it was recently discovered that Trump supporters and QAnon conspiracists were using an app called Power10 to turn their Twitter accounts into bots.
Twitter says that since it introduced a new developer review process in July 2018, it has reviewed over a million developer applications and approved 75%. It also suspended more than 144,000 apps from bad actors over the last six months and revamped its developer application process to make it easier to use. It's now working on the next generation of the Twitter API and is continuing to explore new products, including through its testing program, Twitter Developer Labs.