"You can only turn off this setting 3 times a year."
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
Actually, most users probably don't understand that this ridiculous policy is more effort to implement. They just blindly follow whatever MS prescribes and have long given up on making any sense of the digital world.
It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.
I agree with you, but unfortunately there's nothing astonishing about any of this; it was bound to happen. Almost all cautionary statements about AI abuse fall on the deaf ears of HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.
The worst part was that all the people fretting about ridiculous threats, like the chatbot turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats.
Right. But then the AI firms did that deliberately, didn't they? Started the big philosophical argument to move the focus away from the things they were doing (epic misappropriation of intellectual property) and the very things their customers intended to do: fire huge numbers of staff on an international, multi-industry scale, replace them with AI, and replace already limited human accountability with simple disclaimers.
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
Yes, any non-E2EE cloud storage system has strict scanning for CSAM. And it's based on perceptual hashes, not AI (because AI systems can be tricked with normal-looking adversarial images pretty easily).
I built a similar photo ID system, not for this purpose or content, and the idea of platforms using perceptual hashes to potentially ruin people's lives is horrifying.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
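The shrink-then-hash failure mode is easy to demonstrate. Below is a minimal sketch of a difference hash (dHash) in pure Python, operating on images represented as 2D grayscale lists; the specific images and sizes are made up for illustration, not taken from any real system. Because the hash only sees a 9x8 average-pooled thumbnail, two images that differ at every single pixel can still collide exactly:

```python
def downscale(img, w=9, h=8):
    """Average-pool a 2D grayscale image down to w x h.
    This lossy shrink step is what most perceptual hashes share."""
    H, W = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            y0, y1 = y * H // h, max(y * H // h + 1, (y + 1) * H // h)
            x0, x1 = x * W // w, max(x * W // w + 1, (x + 1) * W // w)
            block = [img[yy][xx] for yy in range(y0, y1) for xx in range(x0, x1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def dhash(img):
    """64-bit difference hash: one bit per adjacent-pixel gradient."""
    bits = 0
    for row in downscale(img):
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (a < b)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# A smooth horizontal gradient...
grad = [[x for x in range(256)] for _ in range(256)]
# ...and the same gradient with a +/-20 checkerboard texture on top:
# every pixel differs, but the texture averages out within each pooled block.
textured = [[x + (20 if (x + y) % 2 else -20) for x in range(256)]
            for y in range(256)]
# A genuinely different image (reversed gradient) for contrast.
reverse = [[255 - x for x in range(256)] for _ in range(256)]

print(hamming(dhash(grad), dhash(textured)))  # 0  -> exact collision
print(hamming(dhash(grad), dhash(reverse)))   # 64 -> maximally different
```

The collision happens because the checkerboard sums to zero inside every pooled block, so the thumbnails are identical; real-world false positives are subtler, but the mechanism is the same.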
I thought Apple’s approach was very promising. Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
I imagine you'd add more heuristics and various types of hashes? If the file is just sitting there, rarely accessed and unshared, or if the file only triggers on 2/10 hashes, it's probably a false alarm. If the file is on a public share, you can probably run an actual image comparison...
They could have avoided the negative press by changing the requirement to be that you can’t re-enable the feature after switching it off 3 times per year.
It’s not hard to guess the problem: Steady state operation will only incur scanning costs for newly uploaded photos, but toggling the feature off and then on would trigger a rescan of every photo in the library. That’s a potentially very expensive operation.
If you’ve ever studied user behavior you’ve discovered situations where users toggle things on and off in attempts to fix some issue. Normally this doesn’t matter much, but when a toggle could potentially cost large amounts of compute you have to be more careful.
For the privacy sensitive user who only wants to opt out this shouldn’t matter. Turn the switch off, leave it off, and it’s not a problem. This is meant to address the users who try to turn it off and then back on every time they think it will fix something. It only takes one bad SEO spam advice article about “How to fix _____ problem with your photos” that suggests toggling the option to fix some problem to trigger a wave of people doing it for no reason.
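The toggle asymmetry described above is easy to put rough numbers on. A back-of-envelope sketch in Python; every figure here (library size, per-photo GPU time, GPU price) is an assumption for illustration, not a Microsoft number:

```python
# All figures are illustrative assumptions, not Microsoft's actual numbers.
PHOTOS_PER_LIBRARY = 10_000      # assumed average photo library size
NEW_PHOTOS_PER_MONTH = 100       # assumed steady-state upload rate
SEC_PER_PHOTO = 0.05             # assumed batched face detect + embed time
GPU_DOLLARS_PER_HOUR = 1.00      # assumed cloud GPU price

def scan_cost(photos):
    """Dollar cost of running face recognition over `photos` images."""
    return photos * SEC_PER_PHOTO / 3600 * GPU_DOLLARS_PER_HOUR

steady_state_yearly = scan_cost(NEW_PHOTOS_PER_MONTH * 12)
full_rescan = scan_cost(PHOTOS_PER_LIBRARY)

print(f"steady state, per user per year: ${steady_state_yearly:.4f}")
print(f"one off->on toggle, per user:    ${full_rescan:.4f}")
print(f"one toggle wave across 10M users: ${full_rescan * 10_000_000:,.0f}")
```

Per user the rescan looks cheap, but under these assumptions a toggle wave across millions of users is a seven-figure bill, which is plausibly the exposure the cap is meant to limit.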
And not just advertising. If ICE asks Microsoft to identify accounts of people who have uploaded a photo of "Person X", do you think they're going to decline?
They'd probably do it happily even without a warrant.
I'd bet Microsoft is doing this more because of threats from USG than because of advertising revenue.
> They'd probably do it happily even without a warrant
I'm old enough to remember when companies were tripping over themselves after 9/11 trying to give the government anything they could to help them keep an eye on Americans. They eventually learned to monetize this, and now we have the surveillance economy.
This is such a norm in society now: PR tactics take priority over any notion of accountability, and most journalists and publishers act as stenographers, because challenging or even characterizing the PR line is treated as an unjustified attack and met with inflated claims of bias.
Just as linking to original documents, court filings etc. should be a norm in news reporting, it should also be a norm to summarize PR responses (helpful, dismissive, evasive or whatever) and link to a summary of the PR text, rather than treating it as valid body copy.
People need to treat PR like they do AIs. "You utterly failed to answer the question, try again and actually answer the question I asked this time." I'd love to see corporate representatives actually pressed to answer. "Did you actually do X, yes or no, if you dodge the question I'll present you as dodging the question and let people assume the worst."
They take people for idiots. This can work a few times, but even someone who isn't the brightest will eventually put two and two together when they get screwed again and again and again.
> and follow Microsoft's compliance with General Data Protection Regulation
Not in a million years. See you in court. As is often the case, just because a press statement says something doesn't mean it's true; it may only be meant to defuse public perception.
Truly bizarre. I'm so glad I detached from Windows a few years back, and now when I have to use it or another MS product (eg an Xbox) it's such an unpleasant experience, like notification hell with access control checks to read the notifications.
The sad thing is that they've made it this way, as opposed to Windows being inherently deficient; it used to be a great blend of GUI convenience with ready access to advanced functionality for those who wanted it, whereas MacOS used to hide technical things from a user a bit too much and Linux desktop environments felt primitive. Nowadays MS seems to think of its users as if they were employees or livestock rather than customers.
Growing up, Microsoft dominance felt so strong. Three decades later, there's a really high chance my kids will never own or use a Windows machine (unless their jobs give them one).
Microsoft hate was something else in the '90s and 2000s. Yet people stayed with it as if they had no choice while OS/2, AmigaOS, NextStep, BeOS and all those UNIXes died.
An employer requires their workers to use Windows; the target audience for Windows is management, their HR and attorneys, and then the greater security services. MSFT sells investigative services.
I think the EU is flawed in more ways than just one. But every time I see "<AI feature> will be available starting now outside the EU" I am really grateful.
Insider here, in M365 though not OneDrive. It did change, but not because of Satya; because of rules, legislation, and bad press. Privacy and security are taken very seriously (at least by people who care to follow internal rules) not because "we're nice", but because:
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable
- it's bad press when we get caught doing garbage like that. And bad press is bad for business
In my org, doing anything with customer's data that isn't directly bringing them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal to take customer's money and exploit their data is too big. The kind of shit that will get me to leave.
> Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads.
How long has MS been putting ads in the start menu?
Do we even think that was real? I think social media has been astroturfed for a long time now. If enough people make those claims, it starts to feel true even without evidence to support it.
Did they ever open source anything that really makes you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
This week I have received numerous reminders from Microsoft to renew my Skype credit..
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The Skype acquisition was $8.5B in 2011 (that's $12.24B in today's money).
Does this mean that when you disable, all labels are deleted, and when you turn it back on it has to re-scan all of your photos? Could this be a cost-saving measure?
I don't really see the issue. If you don't want the face recognition feature, then you'll turn it off once, and that's that. Maybe if you're unsure, you might turn it off, and then back on, and then back off again. But what's the use case where you'd want to do this more than 3x per year?
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
If this is the true reason, then they have made some poor decisions throughout that still deserve criticism. Firstly by restricting the number of times you can turn it _off_ rather than _on_, secondly by not explaining the reason in the linked pages, and thirdly by having their publicist completely refuse to say a word on the matter.
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently, "You can only change this setting 3 times a year", dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_.
Well, sometimes Microsoft decides to change your settings back. This has happened to me very frequently after installing Windows updates. I remember finding myself turning the same settings off time and again.
The "fuck you, user!" behavior of software companies now means there's no more "No", only "Maybe later". Every time I update Google Photos, it shows me the screen that "Photos backups are not turned on! Turn on now?" (because they want to upsell their paid storage space option).
Silicon Valley companies are like a creepy guy in the nightclub going up to each woman and asking "Want to dance? [Yes] or [Ask Me Again]". The desperation is pathetic.
> If you don't want the face recognition feature, then you'll turn it off once.
The issue is that this is a feature that, in any sane world, should 100% be opt-in - not opt-out.
Microsoft privacy settings are a case of - “It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.”
Even KDE's Digikam can run "somewhat expensive" algorithms on your photos without melting your PC and making you wait a year to recognize and label faces.
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time, and discriminate it from 5 registered faces to do focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
> When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn _off_ this setting 3 times a year" (emphasis mine).
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters, I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos; turning it off under privacy rules means throwing away the results of that compute, and turning it back on creates an additional cost for MS, repeating work they've already paid for. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
So I think it is an issue that could and should be quickly solved.
Right, while I understand the potential compute cost, it would be like the iPhone restricting the number of times you could use “allow once“ for location permissions.
The point is it’s sucking your data into some amorphous big brother dataset without explicitly asking you if you want that to happen first. Opt out AI features are generally rude, trashy, low-class, money grubbing data grabs
> Presumably, it's somewhat expensive to run face recognition on all of your photos.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and make it opt-out by default. They can put a (?) callout on the UI for design decisions that have external constraints.
I wonder if it's possible to encrypt the index with a key that's copied to the user's device, and if the user wants to turn off this setting, delete the key on the server. When they want to turn it back on, the device uploads the key. Yes, the key might end up gone if there's a reinstall, etc.
If the user leaves it off for a year, then delete the encrypted index from the server...
"When this feature is disabled, facial recognition will be disabled immediately and existing recognition data will be purged within 60 days". Then you don't need a creepy message. Okay, so that's 6 times a year, but whatever.
How hard is it to turn it on? Does it show a confirmation message?
My wife has a phone with a button on the side that opens the microphone to ask questions to Google. I guess 90% of the audios they get are "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
I bought a new Motorola phone and there are no less than three ways to open Google assistant (side button, hold home button, swipe from corner). Took me about 10 seconds before I triggered it unintentionally and quickly figured out how to disable all of them...
Almost feels like we are getting to class action or antitrust territory when you connect the dots. Almost all PCs come with Windows. De facto, you need to create a M$ account to use Windows locally. They opt you into OneDrive by default. They sync your docs by default. They upload all your photos into AI by default.
Sounds like this advice will be expiring along with the next Windows update, so if you want a local account your window of opportunity may be closing. (What happens when you need to get a new PC?)
I was quite happy for a couple years to just use windows and wsl. Fully switched to Linux at home and Linux VM's at work. The thirst and desperation to make AI work gives me the creeps more than usual.
How is this not revenge porn or something? If I upload sensitive photos somewhere, it's a 5-year prison sentence! The CEO of Microsoft can do it a billion times!
This is once again strongly suggesting that Microsoft is thoroughly doomed if the money they've dumped into AI doesn't pan out. It seems to me that if your company is tied to Microsoft's cloud platform, you should probably consider moving away as quickly as you can. Paying the VMware tax and moving everything in house is probably a better move at this point.
It really seems as though Microsoft has total contempt for their retail/individual customers. They do a lot to inconvenience those users, and it often seems gratuitous and unnecessary. (As it does in this case.)
...I guess Microsoft believes that they're making up for it in AI and B2B/Cloud service sales? Or that customers are just so locked-in that there's genuinely no alternative? I don't believe that the latter is true, and it's hard to come back from a badly tarnished brand. Won't be long before the average consumer hates Microsoft as much as they hate HP (printers).
> Slashdot: What's the reason OneDrive tells users this setting can only be turned off 3 times a year? (And are those any three times — or does that mean three specific days, like Christmas, New Year's Day, etc.)
> [Microsoft's publicist chose not to answer this question.]
Whenever I have to use Windows, I just create a new throwaway account on proton, connect it to the mother throwaway account connected to a yahoo email account created in the before times, install what I need, and then never access that account again.
For private repos there is Forgejo, Gitea and Gitlab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on github" anymore. But it is a small price to pay for dignity.
> Microsoft only lets you opt out of AI photo scanning
Their _UI_ says they let you opt out. I wouldn't bet on that actually being the case. At the very least - a copy of your photos goes to the US government, and they do whatever they want with it.
Isn't it cute when there's absolutely no rationale behind a new rule, and it's simply an incursion made in order to break down a boundary?
Look, scanning with AI is available!
Wow, scanning with AI is now free for everyone!
What? Scanning with AI is now opt-out?
Why would opting-out be made time-limited?
WTF, what's so special about 3x a year? Is it because it's the magic number?
Ah, the setting's gone again, I guess I can relax. I guess the market wanted this great feature, or else they wouldn't have gradually forced it on us. Anyway, you're a weird techie for noticing it. What do you have to hide?
There is a big rationale behind it. If their AI investments don't pan out, Microsoft will cease to exist. They've been out of ideas since the late 90s. They know that the subscription gravy train has already peaked. There is no more growth unless they fabricate new problems for which they will then force you to pay for the solution to the problem they created for you. Oh, your children were kidnapped because Microsoft sold their recognition and location data to kidnappers? Well you should have paid for Microsoft's identity protection E7 plus add-on subscription that prevents them from selling the data you did not authorize them to collect to entities that they should know better than to deal with.
I don't even get why they would need "ideas" or "growth", tbh. They have the most popular desktop operating system and one of the most popular office suites; surely they make plenty of profit from those. If they just focused on making their existing products not shit, they would remain a profitable company indefinitely. But instead they're enshittifying everything because they want more, More, MORE.
Because there are too many people chasing an ever-rising line on the valuation chart. It is simply not acceptable anymore to have a reasonable business that generates solid dividends and grows with opening markets and population. Blame Silicon Valley, VC and the like...
Facebook introducing photo tagging was when I exited Facebook.
This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised now. More than ever, you are their product. It strikes me as great insecurity.
> I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.
Sure but if it was for your benefit, not theirs, they wouldn't force it on you.
My initial thought was that this lets them scan for CSAM while pretending users have a choice not to have their privacy violated.
From my understanding, CSAM scanning is always considered a separate, always on and mandatory subsystem in any cloud storage system.
I assume this would be a ... call it a feature for now; a feature not available in the EU due to GDPR violations.
You can really tell that Microsoft has adopted advertising as a major line of business.
The privacy violations they are racking up are very reminiscent of prior behavior we've seen from Facebook and Google.
ICE doesn't have to ask for anything; the USG gets a copy of all data Microsoft collects from you anyway. Remember:
https://www.pcmag.com/news/the-10-most-disturbing-snowden-re...
Did anyone notice that Microsoft never answered any of the questions asked, but deflected them?
They are exactly where I left them 20 years ago.
It's very sad that I can't stop using them again for doing this.
Challenging or even characterizing the PR line is usually treated as an unjustified attack to justify inflated claims of bias.
It's not just PR tactics trumping accountability. It's also that there's a glut of lawyers who'll sue over the tiniest admission of anything.
They prevaricated in all of their answers, and that itself is far more telling.
Do you remember this: http://toastytech.com/evil/index.html ?
I was afraid for the EU economy, but after this declaration I'm reassured that Microsoft will pay for my grandkids' education in 30 years.
Microsoft in the past few years has totally lost its mind; it's ruining nearly everything it touches and I can't understand why.
They never changed. For some reason Satya became CEO and nerds fawned over the “new Microsoft” for whatever reason.
They are a hard nosed company focused with precision on dominance for themselves.
Insider here, in M365 though, not OneDrive. It did change, but not because of Satya; because of rules, legislation, and bad press. Privacy and security are taken very seriously (at least by people who care to follow internal rules), not because "we're nice", but because:
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable
- it's bad press when we get caught doing garbage like that, and bad press is bad for business
In my org, doing anything with customers' data that isn't directly bringing them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal to take customers' money and exploit their data is too big. The kind of shit that will get me to leave.
> Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads.
How long has MS been putting ads in the start menu?
Do we even think that was real? I think social media has been astroturfed for a long time now. If enough people make those claims, it starts to feel true even without evidence to support it.
Did they ever open source anything that really made you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
Money and power. Who was the first BigTech co on the Prism slides? Who muscled out competitors in the 90s?
This week I have received numerous reminders from Microsoft to renew my Skype credit.
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The skype acquisition was $8.5B in 2011 (That's $12.24B in today's money.)
Does this mean that when you disable, all labels are deleted, and when you turn it back on it has to re-scan all of your photos? Could this be a cost-saving measure?
In that case, they should make it the other way around — you can enable this only three times a year.
No, it's a profit-seeking measure.
They should do it the other direction, then: if you turn it off more than three times you can’t turn it back on.
But that's less good for profit. Why would they give up money for morals?
Especially when you can just eat money to survive once you relocate to Mars, no?
>Does this mean that when you disable, all labels are deleted
AHHAHAHAHAHAHAHAHA.
Ha.
Nice one.
I don't really see the issue. If you don't want the face recognition feature, then you'll turn it off once, and that's that. Maybe if you're unsure, you might turn it off, and then back on, and then back off again. But what's the use case where you'd want to do this more than 3x per year?
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
If this is the true reason, then they have made some poor decisions throughout that still deserve criticism. Firstly by restricting the number of times you can turn it _off_ rather than _on_, secondly by not explaining the reason in the linked pages, and thirdly by having their publicist completely refuse to say a word on the matter.
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently: "You can only change this setting 3 times a year", dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_.
Well, sometimes Microsoft decides to change your settings back. This has happened to me very frequently after installing Windows updates. I remember finding myself turning the same settings off time and again.
The "fuck you, user!" behavior of software companies now means there's no more "No", only "Maybe later". Every time I update Google Photos, it shows me the screen that "Photos backups are not turned on! Turn on now?" (because they want to upsell their paid storage space option).
The lack of a true “no” option and only “maybe later” infuriates me.
Silicon Valley companies are like a creepy guy in the nightclub going up to each woman and asking "Want to dance? [Yes] or [Ask Me Again]". The desperation is pathetic.
> If you don't want the face recognition feature, then you'll turn it off once.
The issue is that this is a feature that 100% should, in any sane world, be opt-in, not opt-out.
Microsoft privacy settings are a case of: “It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”
Even KDE's Digikam can run "somewhat expensive" algorithms on your photos without melting your PC and making you wait a year to recognize and label faces.
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time and match them against 5 registered faces for focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
> When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn _off_ this setting 3 times a year" (emphasis mine).
Agreed, in practice for me there's no real issue.
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters, I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos; turning the feature off under privacy rules means throwing that compute away, and turning it back on means MS pays again for work it had already done for nothing. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
So I think it is an issue that could and should be quickly solved.
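For what it's worth, the inverted policy suggested above (limit opt-ins, never opt-outs) is trivial to express in code. A minimal Python sketch, where `ScanToggle` and the 3-per-365-days window are hypothetical illustrations, not Microsoft's actual implementation:

```python
from datetime import datetime, timedelta, timezone

class ScanToggle:
    """Hypothetical inverted policy: opting OUT is always allowed;
    opting back IN (which would cost a full re-scan) is what gets limited."""
    MAX_ENABLES = 3
    WINDOW = timedelta(days=365)

    def __init__(self):
        self.enabled = False
        self._enable_times = []

    def turn_off(self):
        # The privacy choice is never restricted.
        self.enabled = False

    def turn_on(self, now=None):
        now = now or datetime.now(timezone.utc)
        # Keep only enables that fall inside the rolling one-year window.
        recent = [t for t in self._enable_times if now - t < self.WINDOW]
        if len(recent) >= self.MAX_ENABLES:
            raise PermissionError("re-enable limit reached; feature stays off")
        self._enable_times = recent + [now]
        self.enabled = True
```

If the limit is exhausted, the user ends up locked into the private state rather than the scanned one, which is the whole point of flipping the restriction.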
> what's the use case where you'd want to do this more than 3x per year?
That means that all Microsoft has to do to get your consent to scan photos is turn the setting on every quarter.
To prevent you from having the option to temporarily disable it, so you have to choose between privacy and the supposed utility
Right, while I understand the potential compute cost, it would be like the iPhone restricting the number of times you could use “allow once“ for location permissions.
The point is it’s sucking your data into some amorphous big brother dataset without explicitly asking you if you want that to happen first. Opt out AI features are generally rude, trashy, low-class, money grubbing data grabs
> Presumably, it's somewhat expensive to run face recognition on all of your photos.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and make the feature off by default. They can put a (?) callout on the UI for design decisions that have external constraints.
I wonder if it's possible to encrypt the index with a key that's copied to the user's device, and if the user wants to turn off this setting, delete the key on the server. When they want to turn it back on, the device uploads the key. Yes, the key might end up gone if there's a reinstall, etc.
If the user leaves it off for a year, then delete the encrypted index from the server...
"When this feature is disabled, facial recognition will be disabled immediately and existing recognition data will be purged within 60 days". Then you don't need a creepy message. Okay, so that's 6 times a year, but whatever.
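The key-deletion idea above is sometimes called crypto-shredding, and it's easy to sketch. A toy Python illustration, under loud assumptions: `FaceIndexStore` is a hypothetical name, and the SHA-256 XOR keystream is a stand-in for real authenticated encryption, not something to use in production:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream); same call encrypts and decrypts."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

class FaceIndexStore:
    """Server keeps only ciphertext; the key lives on the user's device.
    Disabling the feature deletes the server's key copy, making the
    index unreadable without re-running recognition."""
    def __init__(self):
        self.ciphertext = None
        self.server_key = None  # present only while the feature is on

    def enable(self, device_key: bytes, index: bytes):
        self.server_key = device_key
        self.ciphertext = keystream_xor(device_key, index)

    def disable(self):
        self.server_key = None  # crypto-shred; ciphertext kept for quick re-enable
        # (per the idea above, ciphertext itself would be deleted after a year off)

    def re_enable(self, device_key: bytes) -> bytes:
        self.server_key = device_key
        return keystream_xor(device_key, self.ciphertext)  # no re-scan needed
```

Under this scheme, toggling off is cheap and immediate, and toggling back on costs nothing but a key upload, so no toggle limit would be needed at all.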
How hard is it to turn it on? Does it show a confirmation message?
My wife has a phone with a button on the side that opens the microphone to ask questions to Google. I guess 90% of the audios they get are "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
I bought a new Motorola phone and there are no less than three ways to open Google assistant (side button, hold home button, swipe from corner). Took me about 10 seconds before I triggered it unintentionally and quickly figured out how to disable all of them...
So why not limit how many times you can turn it on, instead of off?
We all know why.
Assuming this reasoning is accurate, why not just silently throw a rate limit error and simply not reenable it if it's repeatedly switched on and off?
This made me look up if you can disable iOS photo scanning and you can’t. Hmm.
Fedora with vanilla Gnome is excellent for anyone looking for an alternative.
Microsoft: forces OneDrive on users via dark pattern dialogs that many users just accept
Users: save files "on their PC" (they think)
Microsoft: Rolls out an AI photo-scanning feature to unknowing users, intending to learn something from their photos.
Users: WTF? And there are rules on turning it on and off?
Microsoft: We have nothing more to share at this time.
Favorite quote from the article:
> [Microsoft's publicist chose not to answer this question.]
Almost feels like we are getting to class action or antitrust territory when you connect the dots. Almost all PCs come with Windows. De facto, you need to create a M$ account to use Windows locally. They opt you into OneDrive by default. They sync your docs by default. They upload all your photos into AI by default.
You can use Windows without a Microsoft account, but the dark pattern to do this is very difficult to navigate.
Sounds like this advice will be expiring along with the next Windows update, so if you want a local account your window of opportunity may be closing. (What happens when you need to get a new PC?)
Tell them "you may only refuse to answer this question 3 times a year".
It's totally worth self hosting files, it's gotten much better.
I was quite happy for a couple years to just use Windows and WSL. Fully switched to Linux at home and Linux VMs at work. The thirst and desperation to make AI work gives me the creeps more than usual.
How is this not revenge porn or something? If I upload someone's sensitive photos somewhere, it's a five-year prison sentence! The CEO of Microsoft can do it a billion times!
This is once again strongly suggesting that Microsoft is thoroughly doomed if the money they've dumped into AI doesn't pan out. It seems to me that if your company is tied to Microsoft's cloud platform, you should probably consider moving away as quickly as you can. Paying the VMware tax and moving everything in house is probably a better move at this point.
Seems obvious they actually mean to limit the number of times you can opt in. Very poor choice of words.
The difference is whether you get locked into having it on or having it off at the end.
That’s not opt out. Opt out is the ability to say no. If you’re not allowed to say no there’s no consent and you’re being forced.
If you opt out and then never turn it back on, you have opted out.
It really seems as though Microsoft has total contempt for their retail/individual customers. They do a lot to inconvenience those users, and it often seems gratuitous and unnecessary. (As it does in this case.)
...I guess Microsoft believes that they're making up for it in AI and B2B/Cloud service sales? Or that customers are just so locked-in that there's genuinely no alternative? I don't believe that the latter is true, and it's hard to come back from a badly tarnished brand. Won't be long before the average consumer hates Microsoft as much as they hate HP (printers).
Year of the Linux desktop edges ever closer.
> Slashdot: What's the reason OneDrive tells users this setting can only be turned off 3 times a year? (And are those any three times — or does that mean three specific days, like Christmas, New Year's Day, etc.)
> [Microsoft's publicist chose not to answer this question.]
Presumably you just need to turn it off once, right?
There's a great solution to this.
Just stop using Microsoft shit. It's a lot easier than untangling yourself from Google.
Is there a free platform that lets me blog the way GitHub Pages does?
Yeah it is legitimately hard to avoid Google, if nothing else some of your emails will probably be leaked to Gmail.
But Microsoft is pretty easy to avoid after their decade of floundering.
Whenever I have to use Windows, I just create a new throwaway account on proton, connect it to the mother throwaway account connected to a yahoo email account created in the before times, install what I need, and then never access that account again.
It's fucked that you almost need mob-level burner-cell precautions to have privacy and use Excel.
How can I play starcraft 2 without it?
Apparently it runs in Proton (I haven’t tried it though).
Starcraft 2 w/ Battlenet has been working on Linux for over a decade. You don’t even need Proton, it works lovely with WINE.
Yes. Just use Immich for photos. AI scanning, but local and only opt-in.
you mean like stop using GitHub?
Yes.
For private repos there is Forgejo, Gitea and Gitlab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on github" anymore. But it is a small price to pay for dignity.
Why not put open-source projects on Gitlab?
Yes, that too.
Microsoft is such a scummy company. They always were but they've become even worse since they've gone all in on AI.
I wonder if this is also a thing for their EU users. I can think of a few laws this violates.
Crossposting slashdot?
Heaven forfend!
They are the ones who did this interview
Reminder: Microsoft owns Github and NPM.
Makes me want to download and install windows, and store a picture of my hairy brown nutsack with googly eyes on it.
I think a call to Australia’s privacy commissioner might be in order.
What are they gonna do? Hard to have a convo with your master when you're on your knees...
> I uploaded a photo on my phone to Microsoft's
That's your problem right there.
> Microsoft only lets you opt out of AI photo scanning
Their _UI_ says they let you opt out. I wouldn't bet on that actually being the case. At the very least, a copy of your photos goes to the US government, and they do whatever they want with it.
I've never seen a better case for uploading endless AI slop photos.
Isn't it cute when there's absolutely no rationale behind a new rule, and it's simply an incursion made in order to break down a boundary?
Look, scanning with AI is available!
Wow, scanning with AI is now free for everyone!
What? Scanning with AI is now opt-out?
Why would opting-out be made time-limited?
WTF, what's so special about 3x a year? Is it because it's the magic number?
Ah, the setting's gone again, I guess I can relax. I guess the market wanted this great feature, or else they wouldn't have gradually forced it on us. Anyway, you're a weird techie for noticing it. What do you have to hide?
There is a big rationale behind it. If their AI investments don't pan out, Microsoft will cease to exist. They've been out of ideas since the late 90s. They know that the subscription gravy train has already peaked. There is no more growth unless they fabricate new problems for which they will then force you to pay for the solution to the problem they created for you. Oh, your children were kidnapped because Microsoft sold their recognition and location data to kidnappers? Well you should have paid for Microsoft's identity protection E7 plus add-on subscription that prevents them from selling the data you did not authorize them to collect to entities that they should know better than to deal with.
I don't even get why they would need "ideas" or "growth", tbh. They have the most popular desktop operating system and one of the most popular office suites; surely they make plenty of profit from those. If they just focused on making their existing products not shit they would remain a profitable company indefinitely. But instead they're enshittifying everything because they want more More MORE
Because there are too many people chasing an ever-rising line on the valuation chart. It is simply not acceptable anymore to have a reasonable business that generates solid dividends and grows with opening markets and population. Blame Silicon Valley, VCs and the like...