My research focuses on the risk posed by human behavior when users interact with security mechanisms. I am investigating authentication methods in which the final decision is based on risk metrics and adjustable levels of confidence. Concretely, I design and test new behavioral authentication methods that aim to lower the security risks arising from human behavior during use.
Studies of cyber attacks have identified humans as the weakest link in most attacks that gain access to critical systems. As a use case, I study user authentication, the mechanism that secures access to a system in which a human is involved as an operator.
Existing authentication methods based on "something you know" and "something you have" are inherently binary: the system's confidence in the user's authenticity must be 100% for access to be granted. I focus on behavioral biometrics in particular because they are comparatively new, user-friendly, and potentially well suited to emerging environments and contexts such as IoT devices and critical infrastructures. In addition, the overall risk computation on which the authentication decision is based inherently captures user behavior.
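The contrast between a binary check and a risk-based decision can be sketched as follows. This is a minimal illustration, not any specific system: the behavioral features (typing speed, keystroke interval), the scoring formula, and the threshold value are all hypothetical placeholders for a real behavioral-biometric model.

```python
from dataclasses import dataclass


@dataclass
class BehavioralSample:
    # Two illustrative behavioral features; a real system would use many more
    typing_speed_wpm: float
    avg_keystroke_interval_ms: float


def risk_score(sample: BehavioralSample, profile: BehavioralSample) -> float:
    """Risk in [0, 1]; higher means less confidence that the user is genuine."""
    speed_dev = abs(sample.typing_speed_wpm - profile.typing_speed_wpm) / max(
        profile.typing_speed_wpm, 1.0
    )
    interval_dev = abs(
        sample.avg_keystroke_interval_ms - profile.avg_keystroke_interval_ms
    ) / max(profile.avg_keystroke_interval_ms, 1.0)
    # Hypothetical equal weighting of the two feature deviations
    return min(1.0, 0.5 * speed_dev + 0.5 * interval_dev)


def authenticate(sample: BehavioralSample, profile: BehavioralSample,
                 risk_threshold: float = 0.2) -> bool:
    # Unlike a binary password check, access is granted whenever the
    # computed risk stays below an adjustable confidence threshold.
    return risk_score(sample, profile) < risk_threshold


profile = BehavioralSample(typing_speed_wpm=60.0, avg_keystroke_interval_ms=180.0)
sample = BehavioralSample(typing_speed_wpm=58.0, avg_keystroke_interval_ms=190.0)
print(authenticate(sample, profile))  # small deviation from the enrolled profile
```

Tuning `risk_threshold` is what makes the confidence level adjustable: a stricter threshold trades more false rejections for fewer false acceptances.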
I will also apply threat modeling to these new authentication systems, making them resilient against vulnerabilities and cyber threats that stem specifically from human factors.