Microsoft’s Vasu Jakkal: AI will transform cybersecurity - and tackle the cyber skills shortage
Cybersecurity is getting the generative AI treatment, with Microsoft’s new Security Copilot service incorporating the same artificial intelligence technology that has taken the world by storm.
Security Copilot combines OpenAI’s GPT-4 generative AI with a security-specific model Microsoft has created. It allows cybersecurity professionals to enter prompts to find out about the status of a network’s security, and to ask advice about steps to take when a cyber attack is in progress.
Copilot works with Microsoft Defender and Sentinel, the software giant’s flagship security products, but via APIs (application programming interfaces) can also interact with other security tools an organisation is deploying.
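Microsoft has not published the details of those integration APIs, so as a purely illustrative sketch, a copilot-style console that fans a natural-language prompt out to connected security tools might look something like this (the connector names and interface below are hypothetical, not Microsoft's actual API):

```python
# Hypothetical sketch: the tool/connector interface here is an assumption,
# not Microsoft's published Security Copilot API.

class SecurityTool:
    """Base class for a connected security product."""
    name = "generic"

    def query(self, prompt: str) -> str:
        raise NotImplementedError


class DefenderConnector(SecurityTool):
    """Stand-in for an endpoint-protection product such as Defender."""
    name = "defender"

    def query(self, prompt: str) -> str:
        return f"[defender] endpoint status for: {prompt}"


class SentinelConnector(SecurityTool):
    """Stand-in for a SIEM such as Sentinel."""
    name = "sentinel"

    def query(self, prompt: str) -> str:
        return f"[sentinel] SIEM events for: {prompt}"


class CopilotRouter:
    """Routes one natural-language prompt to every registered tool
    and aggregates the answers, as a copilot-style console might."""

    def __init__(self):
        self.tools = {}

    def register(self, tool: SecurityTool):
        self.tools[tool.name] = tool

    def ask(self, prompt: str) -> dict:
        return {name: tool.query(prompt) for name, tool in self.tools.items()}


router = CopilotRouter()
router.register(DefenderConnector())
router.register(SentinelConnector())
answers = router.ask("show alerts from the last 24 hours")
```

The point of the sketch is the shape of the integration: each product plugs in behind a uniform interface, so one prompt can be answered from the whole security estate rather than one console at a time.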
Vasu Jakkal, Microsoft’s Corporate Vice President for Security, Compliance, Identity and Privacy, says AI is part of the answer to the escalating wave of cyberattacks - and the dearth of skilled cybersecurity analysts forming the front line against them.
Defenders are not winning
“We realize we actually need to change the entire game itself, because we are not winning in this game. Like, the defenders are not winning,” Jakkal told Tech Blog.
When Jakkal joined Microsoft in July 2020, in the midst of the pandemic, Microsoft research estimated that there were 567 password-related attacks underway every second, globally.
“The latest data is 1287 password attacks [per second],” she says.
“So you have this absolute proliferation of attacks, and you have less and less time for defenders to contain the escalation.”
AI has gradually been integrated into network, device and application cybersecurity in recent years, but the arrival of ChatGPT-like functionality in a console connected to an entire security environment is a new development. Given the ubiquity of Defender and Sentinel in businesses across the world, that has major implications for how cybersecurity issues are researched, detected and addressed.
Security Copilot allows the type of conversational AI interaction that has aided in the generation of college essays and computer code. But it goes further.
“It's grounded and trained by our 65 trillion signals that we see every single day and also by human threat intelligence,” says Jakkal.
“It’s domain-specific, trained on cyber skills, investigation and hunting and reverse engineering skills.”
Using text prompts, users can ask Security Copilot research questions about cyber threats, but also request updates on their “entire digital estate”, Jakkal says. They can also paste a URL or snippet of code into the prompt window to ascertain its security status.
The Security Copilot console
“It will be able to leverage Defender, Sentinel, your cybersecurity tools, and give you an answer, and it will be able to interact with you in human language, which is also really important,” Jakkal says.
“If you don’t have reverse engineering [skills], great. You can go click on a skill and choose that.”
A collaboration space in Security Copilot allows you to pin information to share with team members, speeding up investigations. If you don’t know what you are looking for, Security Copilot will prompt you for more information. It’s like a helpdesk chatbot for the people responsible for keeping an organisation secure in the digital world.
The ability to create a prompt book means individual members of a cyber team can standardise their prompts for best results, and Security Copilot creates an audit trail useful for evaluating how a cyber incident was dealt with.
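Microsoft has not documented what a prompt book looks like internally, but the idea of standardising prompts for repeatable results can be sketched as a set of parameterised templates (the entry names and wording below are invented for illustration):

```python
# Hypothetical sketch of a "prompt book": the format and entry names are
# assumptions, not Microsoft's documented Security Copilot feature.
from string import Template

PROMPT_BOOK = {
    "triage_alert": Template(
        "Summarise alert $alert_id, list affected hosts, and rate severity."
    ),
    "check_url": Template(
        "Is the URL $url associated with known phishing infrastructure?"
    ),
}


def render(entry: str, **params) -> str:
    """Render a standardised prompt so every analyst asks the same way,
    which also keeps the audit trail comparable across incidents."""
    return PROMPT_BOOK[entry].substitute(**params)


prompt = render("check_url", url="https://example.com/login")
```

Standardised templates like these are what would make an audit trail meaningful: if every analyst triages an alert with the same prompt, differences in outcomes reflect the incident, not the phrasing.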
The cyber talent shortage
Security Copilot, like ChatGPT, will make mistakes, and users can offer feedback on its responses to improve the model. Crucially, an organisation’s data used with Security Copilot doesn’t leave its digital estate - an important privacy and data security consideration.
Jakkal says cyber analysts have little to fear from Security Copilot and its automation of cybersecurity processes. According to the OECD, there is a worldwide shortage of 3.5 million cybersecurity workers. In New Zealand, says Jakkal, the cybersecurity worker shortage is 16.5 times what it was in 2013.
At the same time, the pace of cyber-attacks has increased. According to Microsoft’s research, the average time between an unsuspecting user clicking on a phishing link and entering log-in details, and an attacker gaining access to private data, is now one hour and 12 minutes.
Jakkal says that using automation of cybersecurity processes to tackle the skills shortage can also foster a more diverse cybersecurity workforce.
“It's not that people don't want to choose cybersecurity, but it's hard for them to see a role for themselves,” she says.
“And many times, they don't know where to start because it is very technical. It's very complex.”
Microsoft has undertaken to train 250,000 cybersecurity professionals by 2025 and has partnered with Te Pūkenga on training aimed at underrepresented learners. Says Jakkal: “It's going to invite an entire population of diverse minority talent which never had a seat at the table before.”
Microsoft Security Copilot is now available in private preview with customers.