In his research on deception, Jeff Hancock often refers to the truth bias, formally recognized by Levine, McCornack, and Park in 1999. In essence, people tend to believe what other people tell them, particularly over email, where a permanent record of the conversation exists. Unfortunately, people detect lies only about 50% of the time, which is no better than a coin toss. What are the implications for Identity Governance and Administration (IGA), Identity and Access Management (IAM), and Privileged Access Management (PAM) programs, all of which often incorporate email or other permanent logs of access requests?
The first implication is that recertification efforts with a default denial or revocation stance and an easy ‘approve’ button lead to certification fatigue among supervisors, with no meaningful reduction in the number of accounts with access. Under this all-too-common scenario, organizations send supervisors an online form showing who has access to what on a quarterly, semi-annual, or annual basis. The default stance is that if the supervisor takes no action, the users lose their access. The problem is that supervisors trust that users should continue to have access solely because those users previously had access. Asking everyone whether they still need access is hard and time-consuming, users do not like losing access, and so the path of least resistance is to approve everything.
The second implication concerns new access requests. Consider a workflow in which a systems administrator requests access to another system and the ticket is routed to a supervisor for approval. Because of the truth bias, the supervisor will likely approve the request without much consideration of why the administrator needs access to that system. Since access grants are typically permanent and rarely revoked, this leads to a high degree of access sprawl for administrative staff. Although sprawl is not necessarily a problem in isolation, a compromised administrator account combined with the ability to request access to additional systems is a real threat, particularly if the administrator’s boss tends to rubber-stamp access requests. The same dynamic leads to developers holding permanent access to production systems.
The final implication affects privilege requests. The tendency among PAM systems is to grant full access to impersonate a privileged account, such as root or a Domain Administrator account. Again, an administrator’s supervisor is likely to approve a privileged access request because of the truth bias: the administrator asked for the access, and we trust them, so the request is approved. It is similarly unlikely that anyone will monitor the administrator’s use of that privileged account, because he or she is seen as trustworthy.
These are all sticky problems that deal with human behavior rather than systems or specific technologies. There are no guaranteed fixes – rather, there are strategies to mitigate and reduce the problems associated with the truth bias.
Recertification programs should not measure their effectiveness by the total number of users with access to systems; that metric merely shows the status quo. Rather, recertification programs should show a regular reduction in the number of login privileges users hold. To be meaningful, this metric must be measured separately from the total number of users with access. For example, an organization that sees a 10% increase in the number of users with access in the same time frame in which recertification removes access for 10% of users would show a flat line, masking both the sprawl and the cleanup. The key figure to focus on is the number of accesses removed. However, it must be tempered by the number of immediate re-requests for the same systems – if Sandra loses access to a system on Tuesday and then immediately requests access to it again on Wednesday, the recertification process still is not working.
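As a minimal sketch of the measurement described above – the function name, counts, and re-request discount are all hypothetical, not a prescribed formula – the two figures can be tracked separately like this:

```python
# Hypothetical recertification-effectiveness metric: count accesses removed,
# discounted by removals that were immediately re-requested, and track overall
# access growth as a separate number rather than netting the two together.
def recert_effectiveness(total_before, total_after, removed, re_requested):
    """Return (net removals, overall growth) as two separate metrics."""
    net_removed = removed - re_requested   # an immediate re-request undoes the removal
    growth = total_after - total_before    # access sprawl, measured on its own
    return net_removed, growth

# Example: 1,000 users with access; recertification removes 100 (10%), new
# grants add 100 (10%), and 20 of the removed users immediately re-request.
net_removed, growth = recert_effectiveness(1000, 1000, removed=100, re_requested=20)
print(net_removed, growth)  # 80 0 – the flat total hides 80 genuine cleanups
```

Reporting the pair (80 net removals, 0 growth) shows both the cleanup and the sprawl, where a single headline count of users with access would show nothing happening at all.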
There should be a separation-of-duties boundary between the department of the users requesting access and those who approve it. Ideally, some common sense should also be coded lightly into the access request system – an actuary is unlikely to need access to the production database server, so such a request should, at minimum, be routed to a second-level approver or trigger step-up authentication. A systems administrator requesting access to a system should require approval from the business unit that owns that system.
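The routing logic above can be sketched in a few lines. This is an illustrative rule set only – the department names, resource labels, and sensitivity list are assumptions, not a reference to any particular IGA product:

```python
# Hypothetical access-request routing: requests that cross a department
# boundary, or that target sensitive resources, escalate beyond the
# requester's own supervisor (who is the most likely to rubber-stamp).
SENSITIVE = {"prod-db", "payroll"}  # assumed list of sensitive resources

def route(requester_dept, resource, owner_dept):
    if resource in SENSITIVE and requester_dept != owner_dept:
        # the actuary-asking-for-prod-db case: add approvers and friction
        return "second-level approver + step-up authentication"
    if requester_dept != owner_dept:
        # separation of duties: the owning business unit approves
        return "owning business unit approver"
    return "supervisor"

print(route("actuarial", "prod-db", "it-ops"))  # second-level approver + step-up authentication
```

Even a lightweight rule like this moves the approval decision away from the one person most subject to the truth bias.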
Finally, privileged access should be scoped to specific commands whenever possible, and those commands should be evaluated separately by a security analyst. Consider the innocuous ‘find’ command on the UNIX platform. Although find is intended to locate files, it can also run arbitrary commands, such as tar-ing the files it finds together. While this is intended to make backups easier, a malicious actor can use find to exfiltrate data and to delete or modify any system files accessible to the account running the command. The issue is not unique to UNIX; privileged access requests should be carefully vetted for such risks rather than rubber-stamped by default. Similarly, execution of privileged commands should be monitored in real time rather than reviewed after the fact in a SIEM tool, with the option to add friction when unusual system activity is detected. For example, requiring step-up authentication when executing a privileged command helps reduce the threat of automated attacks using compromised credentials on a privileged account.
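A vetting check of the kind an analyst might apply can be sketched as follows. The risk list is a hypothetical starting point, not a complete policy; it flags find’s command-execution options (`-exec`, `-execdir`, `-ok`), which are the real mechanism behind the abuse described above:

```python
import shlex

# Hypothetical pre-approval check on a requested privileged command:
# flag options that allow the named program to execute arbitrary commands.
RISKY_OPTIONS = {"find": {"-exec", "-execdir", "-ok"}}  # assumed review list

def needs_review(command):
    """Return True if the command uses an option that warrants analyst review."""
    tokens = shlex.split(command)
    if not tokens:
        return False
    program = tokens[0].rsplit("/", 1)[-1]  # strip any leading path
    return any(tok in RISKY_OPTIONS.get(program, set()) for tok in tokens[1:])

print(needs_review("find /data -name '*.log' -exec tar -rf /tmp/out.tar {} ;"))  # True
print(needs_review("find /data -name '*.log'"))                                  # False
```

A plain file search passes, while the same command carrying `-exec` is held for review – exactly the distinction a rubber-stamp approval misses.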
Organizations should determine which actions they can take in the shortest time that will yield the greatest reduction in risk. With careful thought, these changes are less likely to result in unintended and unwanted consequences due to the truth bias.