As the Air Force works to modernize its cybersecurity practices, it wants to deploy artificial intelligence to better support its networks’ defensive measures so airmen get more time to do white-hat hacking and other higher-level tasks.
The plan is to leverage algorithms that can learn from data to fix small issues, especially those easily missed by a human. Fixing bugs and sifting through reams of threat reports are tasks better left to AI than to an overloaded human workforce, Lauren Knausenberger, chief innovation officer for the Air Force, said Wednesday during CrowdStrike’s Fal.Con for Public Sector Conference, produced by FedScoop and CyberScoop.
“We are just going to miss things the more we do manually,” Knausenberger said.
The Air Force would prefer its human cyber workforce to be “creative in an evil way” by spending more time simulating real-world hacks on the service’s technology.
She envisions human analysts being able to “team” with AI systems, identifying where human brainpower can apply the kind of complex problem-solving skills that computers can’t apply on their own.
Using AI-enabled systems to recognize the most important cyberthreat information and then alert airmen is the goal, Knausenberger said.
“Until you get that right … you will be drowning” in reams of data, she said.
Another reason for the Air Force to promote AI: The cybersecurity workforce shortage has been especially acute in government, where employers like the DOD often lag behind the private sector in benefits and pay. Adding AI-powered systems to the defensive workforce ultimately will lead to stronger security, Knausenberger said.
There are also ways to attract outside help. The Air Force has recently expanded its bug bounty programs, including new “Hack-a-Sat” events that invite ethical hackers to target satellites. The department paid out $150,000 to a teenager in recent bug bounties, and the computer whiz was later offered a job at the Defense Department, Knausenberger said.