US standards body says ByteDance researcher wrongly added to AI safety group chat

A person arrives at the offices of TikTok in Culver City, California, U.S., March 13, 2024, after the U.S. House of Representatives overwhelmingly passed a bill that would give TikTok's Chinese owner ByteDance about six months to divest the U.S. assets of the short-video app or face a ban. REUTERS/Mike Blake/File Photo
WASHINGTON, March 18 (Reuters) - A researcher from TikTok's Chinese owner ByteDance was wrongly added to a group chat for American artificial intelligence safety experts last week, the U.S. National Institute of Standards and Technology (NIST) said Monday.
The researcher was added to a Slack instance for discussions between members of NIST's U.S. Artificial Intelligence Safety Institute Consortium, according to a person familiar with the matter.
In an email, NIST said the researcher was added by a member of the consortium as a volunteer.
"Once NIST became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium's code of conduct on misrepresentation," the email said.
The researcher, whose LinkedIn profile says she is based in California, did not return messages; ByteDance did not respond to emails seeking comment.
The person familiar with the matter said the appearance of a ByteDance researcher raised eyebrows in the consortium because the company is not a member and TikTok is at the center of a national debate over whether the popular app has opened a backdoor for the Chinese government to spy on or manipulate Americans at scale. Last week, the U.S. House of Representatives passed a bill to force ByteDance to divest itself of TikTok or face a nationwide ban; the ultimatum faces an uncertain path in the Senate.
The AI Safety Institute is intended to evaluate the risks of cutting-edge artificial intelligence programs. Announced last year, the institute was set up under NIST, and the founding members of its consortium include hundreds of major American tech companies, universities, AI startups, nongovernmental organizations and others, including Reuters' parent company Thomson Reuters.
Among other things, the consortium works to develop guidelines for the safe deployment of AI programs and to help AI researchers find and fix security vulnerabilities in their models. NIST said the Slack instance for the consortium includes about 850 users.

Reporting by Raphael Satter; Editing by Sharon Singleton


Reporter covering cybersecurity, surveillance, and disinformation for Reuters. Work has included investigations into state-sponsored espionage, deepfake-driven propaganda, and mercenary hacking.