Thesis Proposal: Towards Resilient Cyber Networks against Adversarial Attacks
Monday, December 5, 2022
2:00 PM EST | 3-Hour Event
ABSTRACT:
Modern cyberattacks are multifaceted and highly diverse. They evolve constantly, growing more dangerous and harder for existing defense algorithms to recognize. In recent years the number of victim organizations and individuals has grown significantly, causing massive damage worldwide: financial losses, data leakage, and disruption of essential services. Unfortunately, existing countermeasures often fail against advanced adversaries. Under these conditions, the security of cyber networks must be studied extensively, and new, improved protection methods must be proposed. Much recent attention has gone to machine learning algorithms as a way to strengthen existing security measures; however, these algorithms are themselves vulnerable to attacks against machine learning models, so security professionals must also assess how trustworthy machine learning solutions actually are. In my dissertation, I propose several methods that analyze and increase the resilience of cyber networks against attacks. The first step is understanding attack behavior: I model one of the most devastating classes of cyberattack, self-propagating malware, from real-world data using compartmental models from epidemiology, and I derive the conditions under which the malware becomes epidemic. Next, I propose new malware defenses that incorporate the topological structure of cyber networks and compare their effectiveness with that of existing defense mechanisms on real-world communication graphs of large enterprise networks. Finally, I study the robustness of neural networks that classify malicious activity in cyber networks by designing feasible evasion-attack algorithms under cybersecurity-specific constraints.
The techniques that I propose can be used in conjunction with existing defenses and have the potential to increase the resilience of cyber networks against sophisticated attacks.
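To illustrate the compartmental-model framing the abstract mentions, the sketch below simulates a standard discrete-time SIR (Susceptible-Infected-Recovered) model, the classic epidemiology construction often applied to self-propagating malware. This is not the dissertation's implementation; the parameter values are illustrative only. The epidemic-threshold condition shown is the textbook SIR result: an outbreak grows only when the basic reproduction number R0 = beta/gamma exceeds 1.

```python
def simulate_sir(beta, gamma, s0=0.999, i0=0.001, steps=500, dt=0.1):
    """Forward-Euler integration of the SIR equations
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    Returns the peak infected fraction and the final recovered fraction."""
    s, i, r = s0, i0, 0.0
    peak_i = i
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptible hosts newly compromised
        new_rec = gamma * i * dt      # infected hosts cleaned or patched
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak_i = max(peak_i, i)
    return peak_i, r

# Epidemic threshold: R0 = beta/gamma.
peak_epi, final_epi = simulate_sir(beta=0.5, gamma=0.1)   # R0 = 5: outbreak
peak_sub, final_sub = simulate_sir(beta=0.05, gamma=0.1)  # R0 = 0.5: dies out
```

With R0 = 5 the infection sweeps through most of the population, while with R0 = 0.5 it fizzles from its initial seed; defenses that effectively lower beta (infection rate) or raise gamma (cleanup rate) push the network below the epidemic threshold.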
ABOUT THE SPEAKER:
Alesia Chernikova is a Ph.D. candidate at the Khoury College of Computer Sciences, Northeastern University, advised by Dr. Alina Oprea. She received her Bachelor of Science degree in Applied Mathematics from Belarusian State University in 2014. Her research interests include designing algorithms to improve the resilience of cyber networks against self-propagating malware attacks and adversarial machine learning.
COMMITTEE: