<img height="1" width="1" alt="FB pixel" style="display:none" src="https://www.facebook.com/tr?ev=6024262178418&amp;cd[value]=0.01&amp;cd[currency]=CAD&amp;noscript=1">

US defense bill requires comprehensive deepfake weaponization, countermeasures initiative

Deepfake detection tech would be stimulated by bill requiring NSF to hold prize competitions

On December 20, President Donald Trump signed into law the historic $738 billion National Defense Authorization Act (NDAA) for Fiscal Year 2020, which contains a provision requiring the “establishment of deepfakes prize competition” to foster research on deepfake detection technologies, in addition to other comprehensive provisions addressing the threat deepfakes pose to national security.

Ironically, just the day before, Rep. Jennifer Wexton (D-VA) had introduced legislation that would require the director of the National Science Foundation (NSF) to establish prize competitions to incentivize research into the development of innovative technologies to detect deepfakes.

Wexton’s bill, HR 5532, was referred to the House Committee on Science, Space, and Technology.

“Deepfakes pose a serious threat to our national security, and there are significant challenges in our ability to effectively identify this manipulated content,” Wexton said in a statement announcing her legislation, which as of this reporting has no co-sponsors.

Wexton explained that “establishing prize competitions in this critical field of research will help spur greater innovation and research into technologies that can detect deepfakes. With this bill, we will expand the tools available to address this growing threat to our democracy.”

The legislator’s office stated that “prize competitions at NSF have helped spur further research on important and emerging topics like data science, engineering, and astrophysics,” and that the competition Wexton wants the NSF to run is no different. Her office declared that “the rapidly developing nature of artificial intelligence and machine learning has created a demand for new thinking and new technology to keep up with this threat,” and noted that “the United States General Services Administration estimates that since 2010, federal agencies have conducted more than 840 prize competitions and offered more than $280 million in prize money.”

Meanwhile, Section 5724 of the NDAA creates a deepfakes competition, to be managed by the Director of National Intelligence (DNI), awarding up to $5 million to one or more winners in a competitive initiative “to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media.”

The NDAA further requires that within six months the DNI, “in consultation with the heads of the elements of the Intelligence Community [as] determined appropriate by the [DNI],” submit to both the Senate and House intelligence committees a Report on Foreign Weaponization of Deepfakes and Deepfake Technology detailing “the potential national security impacts of machine-manipulated media” and “deepfake technology, foreign weaponization of deepfakes, and related notifications,” in particular notifications to appropriate congressional leaders when hostile foreign actors engage in deepfake … propaganda operations directed at influencing US elections by “spread[ing] disinformation or engag[ing] in other malign activities.”

The defense authorization bill also mandates that the DNI report “on [the] use by [the] Intelligence Community of facial recognition technology.”

“An assessment” is also required by the NDAA “of the technical capabilities of foreign governments, including foreign intelligence services, foreign government-affiliated entities, and foreign individuals, with respect to machine-manipulated media, machine-generated text, generative adversarial networks, and related machine-learning technologies,” including “an assessment of the technical capabilities of the People’s Republic of China and the Russian Federation with respect to the production and detection of machine-manipulated media; and an annex describing those governmental elements within China and Russia known to have supported or facilitated machine-manipulated media research, development, or dissemination, as well as any civil-military fusion, private-sector, academic, or nongovernmental entities which have meaningfully participated in such activities.”

But there’s more: the NDAA also requires “an updated assessment of how foreign governments, including foreign intelligence services, foreign government-affiliated entities, and foreign individuals, could use or are using machine-manipulated media and machine-generated text to harm the national security interests of the United States, including an assessment of the historic, current, or potential future efforts of China and Russia to use machine-manipulated media, including with respect to … overseas or domestic dissemination of misinformation; the attempted discrediting of political opponents or disfavored populations; and intelligence or influence operations directed against the United States, allies, or partners of the United States, or other jurisdictions believed to be subject to Chinese or Russian interference.”

The NDAA also mandates an updated identification of counter-technologies that have been, or could be, developed and deployed by the US government, or by the private sector with government support, to deter, detect, and attribute the use of machine-manipulated media and machine-generated text by foreign governments, foreign-government affiliates, or foreign individuals, along with an analysis of the benefits, limitations, and drawbacks of such identified counter-technologies, including any emerging concerns related to privacy.

The new defense bill additionally requires identification of the offices within the elements of the Intelligence Community that have, or should have, lead responsibility for monitoring the development of, use of, and response to machine-manipulated media and machine-generated text, including:

• A description of the coordination of such efforts across the intelligence community;
• A detailed description of the existing capabilities, tools, and relevant expertise of such element to determine whether a piece of media has been machine manipulated or machine generated, including the speed at which such determination can be made, the confidence level of the element in the ability to make such a determination accurately, and how increasing volume and improved quality of machine-manipulated media or machine-generated text may negatively impact such capabilities;
• A detailed description of planned or ongoing research and development efforts intended to improve the ability of the intelligence community to detect machine-manipulated media and machine-generated text;
• A description of any research and development activities carried out or under consideration to be carried out by the Intelligence Community, including the Intelligence Advanced Research Projects Activity, relevant to machine-manipulated media and machine-generated text detection technologies;
• Updated recommendations regarding whether the Intelligence Community requires additional legal authorities, financial resources, or specialized personnel to address the national security threat posed by machine-manipulated media and machine-generated text; and
• Other additional information the DNI determines appropriate.

Most importantly, however, the DNI, “in cooperation with the heads of any other relevant departments or agencies of the federal government, shall notify the congressional Intelligence Committees each time the [DNI] determines there is credible information or intelligence that a foreign entity has attempted, is attempting, or will attempt to deploy machine-manipulated media or machine-generated text aimed at the elections or domestic political processes of the United States; and that such intrusion or campaign can be attributed to a foreign government, a foreign government-affiliated entity, or a foreign individual.”

Wexton said she “introduced [her] legislation [to] establish a prize competition at @NSF to help spur greater innovation and research into technologies that can detect deepfakes,” emphasizing that “this [bill] will expand the tools available to address this growing threat to our democracy.”

“In May, an online blogger published a doctored video of House Speaker Nancy Pelosi that depicted her appearing to slur her words during a press conference. This video was shared widely on Facebook and Twitter, including by the President himself. Before it had been debunked, the video had been viewed millions of times,” Wexton tweeted.

Later, during the rather fiery October 23 hearing before the House Committee on Financial Services, An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors, Wexton, like other committee members, grilled Facebook CEO Mark Zuckerberg over Facebook’s failure to prevent the distribution of counterfeit news.

Wexton scolded Zuckerberg over what she implied was the company’s unwillingness to moderate political deepfakes, chiefly Facebook’s decision not to delete the widely circulated video of comments by House Speaker Nancy Pelosi, which had been doctored to slur her words and give viewers the impression she was drunk at the time. The video went viral.

Zuckerberg conceded to Wexton that an “operational mistake” by Facebook had resulted in the company’s failure to “fact check” the doctored Pelosi video. He also acknowledged that he played a role in his firm’s decision not to remove the video, saying the decision followed company policy, while admitting that Facebook must develop a specific policy to deal with deepfakes, which he assured the committee the company is doing.

Nevertheless, Wexton pressed on: “Do you understand there’s a difference between misinformation and disinformation?”

“Yes,” Zuckerberg replied, though he added that “it’s not that it’s not our responsibility or that it’s not good to take that into account. It’s just that it’s much harder to determine intent at scale.”

Wexton tweeted in November: “Mark Zuckerberg tried to avoid answering whether he personally refused to take down a widely-circulated deepfake of @SpeakerPelosi—he did. With federal elections looming, @facebook has no plan to safeguard our democracy from the threat of deepfakes.”

Wexton earlier authored an amendment to HR 4355, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act, during the House Committee on Science, Space, and Technology markup of the bill, which passed with bipartisan support and was referred on December 10 to the Senate Committee on Commerce, Science, and Transportation, where it awaits action.

Sen. Catherine Cortez Masto (D-NV) introduced the companion Senate bill, S 2904, the IOGAN Act, on November 20.

According to the House committee report that accompanied the House-passed bill to the Senate, “Specifically, the NSF must support research on the science and ethics of material produced by generative adversarial networks and [the National Institute of Standards and Technology] must support research to accelerate the development of tools to examine the function and outputs of generative adversarial networks.”

The IOGAN Act supports research to close existing gaps in the technology for identifying outputs of generative adversarial networks (GANs). Wexton’s amendment would direct the NSF to conduct research on public understanding and awareness of deepfake videos, as well as best practices for educating the public on how to spot such manipulations and discern the authenticity of content.
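To make the detection task concrete, below is a minimal, purely illustrative sketch of the kind of binary classifier such research aims to improve: a small convolutional network that scores an image as authentic or GAN-generated. The framework (PyTorch), the architecture, and the DeepfakeDetector name are assumptions chosen for illustration; neither the bill nor the article specifies any implementation.

```python
# Illustrative sketch only: a toy binary classifier for flagging
# GAN-generated imagery. Architecture and input size are assumptions.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a single 64-dim descriptor
        )
        self.classifier = nn.Linear(64, 1)  # one logit: real vs. generated

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

detector = DeepfakeDetector()
frame = torch.randn(1, 3, 224, 224)        # stand-in for one video frame
p_fake = torch.sigmoid(detector(frame))    # untrained, so roughly 0.5
```

The research gaps the legislation targets, such as the speed, confidence, and robustness of determinations as fake quality improves, are precisely the weaknesses of simple classifiers like this one, which tend to generalize poorly to media from generators they were not trained against.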

Wexton declared at the time that, “Deepfakes pose a grave threat to our national security, and we are woefully unprepared to counter the impacts they can inflict on every aspect of our society. The more we can improve public awareness and understanding of how manipulated content is created and shared, the better we can strengthen and safeguard our democracy in upcoming elections.”

She said, “Deepfake videos and images have become a growing problem due to GANs,” and that “this technology creates a feedback loop to produce increasingly accurate media outputs that portray highly realistic, but manipulated, content.”
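The “feedback loop” Wexton refers to is adversarial training: a generator network fabricates samples while a discriminator network tries to distinguish them from real data, and each network’s loss is driven by the other’s performance, pushing the generated output toward realism. Below is a minimal sketch of that loop, with toy architectures and dimensions assumed purely for illustration rather than taken from any real deepfake system.

```python
# Minimal GAN training loop, illustrating the generator/discriminator
# feedback loop. Sizes and architectures are toy assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real = torch.randn(32, 784)  # stand-in for a batch of real images

for step in range(100):
    # Discriminator step: learn to score real samples high, fakes low.
    fake = G(torch.randn(32, 16)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: learn to produce samples the discriminator
    # mistakes for real -- the feedback that improves output realism.
    fake = G(torch.randn(32, 16))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Because the generator is rewarded precisely for fooling a detector, generation and detection improve in lockstep, which is why deepfake detection is treated as a moving target rather than a solved problem.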

“Bad actors,” she stressed, “have already begun to use deepfake videos to create a false perception of political figures on social media.”

While other deepfake and related biometrics legislation remains stalled in Congress, as Biometric Update has reported, the NDAA finally provides teeth to what have so far been rather languid efforts to combat the deepfake problem, especially as the 2020 election kicks into high gear. It is a security concern that has been getting much more serious attention in recent months, and even in recent weeks and days.

Indeed, Wexton said in September: “Artificial intelligence experts have indicated that significant challenges remain to effectively neutralize the threat of deepfakes. Raising public awareness of what deepfakes are, how they work, and how to identify them, has been pointed to as a crucial component of efforts to combat deepfakes.”

Earlier in December, the University of Washington launched the Center for an Informed Public, a program seeded in part by $5 million from the John S. and James L. Knight Foundation and designed to understand and combat digital fabrication and deception on social media. The funding was part of a broader $50 million round of grants to 11 American universities and research institutions to study how these technologies can endanger democratic institutions such as elections.
