Thanks for this, it's very timely given what I'm working on right now. Google's proposal seems wildly overcomplicated for any of the use cases I've run across.
Glad to hear it. I feel the library could still be improved, and if your server runs on something other than Node.js, you'll have to put together some straightforward crypto code yourself, so feel free to file an issue on the repo[1] with any questions or requests. The point is not at all to compete with Google, but it could serve as a reasonable stopgap that's easy to implement (no new endpoints, no roundtrips) and should protect against all of today's cookie stealers, which would have to become a lot more sophisticated to beat it. I created a discussion on DBSC's spec repo yesterday with a more direct comparison to Google's proposal[2] that you can check out.
[1] https://github.com/zainazeem/session-lock
[2] https://github.com/WICG/dbsc/discussions
How do you handle scenarios where the user’s device environment changes significantly? For example, if they clear their IndexedDB or switch devices? Does session-lock manage to maintain security in these cases?
If the browser loses the private key from IndexedDB, the session token becomes invalid because the server can no longer verify it. Basically, the user gets logged out, just as they would if they cleared the session token itself by clearing cookies or LocalStorage.
How is this better than an httponly cookie?
httponly cookies are meant to mitigate attacks like XSS by blocking client-side JS from accessing them. However, they can still be stolen by malware on the device (there's a whole class of it called "cookie stealers"). Generally, these search the infected machine's filesystem and pull out any cookies they find, or at least the cookies the attacker is interested in. No client-side JS is required for this, so the httponly attribute doesn't help. There have also been some browser-extension-based cookie stealers that may work along similar principles. Take a look at this old open-source stealer to get a sense of how they work: https://github.com/Alexuiop1337/SoranoStealer/tree/master/So...
Session-Lock and Chrome's DBSC are designed specifically to combat these cookie stealers. The premise is that even if an attacker exfiltrates the token itself, it couldn't be used, because the server would reject any request that isn't signed with the correct private key. That private key can (or should) exist only on the legitimate device, not the attacker's machine. There may or may not be ways to extract the private key as well, but in any event, that would be a much more complicated attack.
This would have been cool for hardware wallets when Ethereum was relevant.
doesn't this only protect against MITM attacks?
Actually, Session-Lock does offer some protection against MITM attacks in the form of a timeout that most MITM attacks would trigger, but its purpose (and that of Chrome's DBSC proposal) is to protect against cookie-stealer malware, not MITM. This is malware that steals session tokens from the device's filesystem. Take a look here to understand the threat: https://blog.google/threat-analysis-group/phishing-campaign-...
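The timeout mechanism could look something like the following sketch (the window size and function names are assumptions for illustration, not session-lock's real parameters). Each signature covers a timestamp, and the server rejects anything outside a short validity window, so a captured-and-replayed request goes stale quickly even if its signature verifies:

```javascript
// Sketch of a signed-timestamp freshness check. MAX_AGE_MS is a
// hypothetical replay window, not a value from session-lock.
const MAX_AGE_MS = 60_000;

function isFresh(signedAtMs, nowMs = Date.now()) {
  // A replayed or MITM-delayed request carries an old timestamp;
  // reject anything outside the window (or from the future).
  return nowMs - signedAtMs <= MAX_AGE_MS && signedAtMs <= nowMs;
}
```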
The premise of Session-Lock and DBSC is that even if the token gets stolen, it would be useless to the attacker, because the server would reject it without a valid signature generated by a private key that should exist only on the legitimate device. That private key has to be difficult or borderline impossible for the attacker to exfiltrate, unlike the session token.
If the user has malware can't that steal the private key as well? Why is it hard to exfil if the attacker has full access?
In an ideal world, the private key should be stored in an HSM, preventing exfiltration. However, even assuming an HSM, the current scheme doesn't protect against malicious actors pre-signing requests on the client and exfiltrating those requests.
This library adds more defense-in-depth, making it harder to attack sessions, but not impossible.