
> Monopolistic companies may not actively impose restrictions which harm others (includes businesses)

That's not generally how monopoly is interpreted in the US (although jurisprudence on this may be shifting). In general, the litmus test is consumer harm. A company is allowed to control 99% of the market if they do it by providing a better experience to consumers than other companies can; that's just "being successful." Microsoft ran afoul of antitrust because their browser sucked and embedding it in the OS made the OS suck too; if they hadn't tried to parlay one product into the other, they likely would not have run afoul of US antitrust law, and they haven't run afoul of it over the fact that 70-90% of x86-architecture PCs run Windows.

> Some restrictions are allowed, but the company must respond to an appeal of restrictions within X minutes; Appeals to the company can themselves be appealed to a governmental independent board which binds the company with no further review permitted; All delays and unreasonable responses incur punitive penalties as judged by the board; All penalties must be paid immediately

There may be meat on those bones (a general law restricting how browsers may operate in terms of rendering user content). Risky because it would codify into law a lot of ideas that are merely technical specifications (you can look to other industries to see the consequences of that, like how "five-over-ones" are cropping up in cities all over the US because they satisfy a pretty uniform fire and structural safety building code to the letter). But this could be done without invoking monopoly protection.

> If an action taken unilaterally by a company 1) harms someone AND 2) is automated: Then, that automation must be immediately, totally, and unconditionally reversed upon the unilateral request of the victim.

Too broad. It harms me when Google blocks my malware distribution service because I'm interested in getting malware on your machine; I really want your Bitcoin wallet passwords, you see. ;)

Most importantly: this whole topic is independent of monopolies. We could cut Chrome out of Google tomorrow and the exact same issues with safe browsing impeding new sites with malware-ish shapes would still exist (the only likely change being a higher false positive rate, since a Chrome cut off from Google would have to build its detection and reporting logic from scratch, without the search crawler DB to lean on). And a user can already install another browser that doesn't have site protection (or, if I understand correctly, switch it off). The reason this is an issue is that users like Chrome and are free to use it and tend to find site protection useful (or at least "not a burden to them"); that's not something Google imposed on the industry, it's a consequence of free user choice.



> Too broad. It harms me when Google blocks my malware distribution service because I'm interested in getting malware on your machine; I really want your Bitcoin wallet passwords, you see. ;)

That's okay, a random company failing to protect users from harm is still better than harming an innocent person by accident. They already fail in many cases, obviously we accept a failure rate above 0%. You also skipped over the rest of that paragraph.

> users like Chrome and are free to use it and tend to find site protection useful (or at least "not a burden to them")

That's okay; Google can abide by the proposal I set forth for avoiding automated, mistaken harm to people. If they want to build a system that can do great harm to people, they need to first and foremost build in safety nets to address the harms they cause, and only then focus on reducing false negatives.


I think there's an unevaluated tension in goals between keeping users safe from malware here and making it easy for new sites to reach people, regardless of whether those sites display patterns consistent with malware distribution.

I don't think we can easily discard the first in favor of the second. Not nearly as categorically as is done here. Those "false negatives" mean users lose things (bank accounts, privacy, access to their computer) through no fault of their own. We should pause and consider that before weeping and rending our garments that yet another hosting provider solution had a bad day.

You've stopped considering monopoly and correctly identified the real issue: safe browsing, as a feature, is useful to users and disruptive to new business models. But that's independent of Google; it's the nature of sharing a network between actors who want to provide useful services to people and actors who want to cause harm. If I built a browser from scratch today that included safe browsing, we'd be in the same place, and there'd be no Google in the story.


> I think there's an unevaluated tension in goals between keeping users safe from malware here and making it easy for new sites to reach people

To be fair, I evaluated that trade-off before replying. It's also not just "new sites", but literally any site or person that could be victimized by "safe browsing".

> Those "false negatives" mean users lose things (bank accounts, privacy, access to their computer) through no fault of their own.

That was already happening, and will continue to happen, no matter what. The only thing the false negative caused is that a stranger didn't swoop in to save a 2nd stranger from a 3rd stranger. That's ok: superheroes are bad government. The government should be the one protecting citizens.


> no matter what

Well, no... That's the thing about false negatives vs. true positives. The more effective the safe browsing protection is, the fewer false negatives. I think we can agree to disagree on where one should tune the knob between minimizing false negatives and minimizing false positives, especially since

a) you have to be doing something pretty unusual to trigger a false positive (such as "setting up an elaborate mechanism to let user-generated content be hosted off of a subdomain you own")

b) there is a workaround once a publisher is aware of the issue.
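One way a publisher can become aware of the issue early, rather than waiting for the red page, is to poll Google's Safe Browsing Lookup API (v4) for their own URLs. A minimal sketch, assuming a valid API key and the documented v4 request shape (the client ID and URL below are placeholders):

    import json
    import urllib.request

    API_KEY = "YOUR_API_KEY"  # placeholder; issued through the Google Cloud console
    ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

    def check_own_urls(urls):
        # Ask the Lookup API whether any of these URLs currently match a threat list.
        body = {
            "client": {"clientId": "example-publisher", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": u} for u in urls],
            },
        }
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps(body).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        # An empty response body means no matches; otherwise "matches" lists the
        # flagged entries and their threat types.
        return result.get("matches", [])

    if __name__ == "__main__":
        flagged = check_own_urls(["https://files.example-publisher.test/"])
        print("flagged entries:", flagged or "none")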

> The government should be the one protecting citizens.

This seems to be a claim "Safe browsing should be a government institution." I don't immediately disagree, but we must ask ourselves "Which government do we trust with that responsibility?" In America, that's a near-vertical cliff to scale (and it was even before the current government proved a willingness to weaponize its enforcement capacity against speech that should by rights be protected).

If I don't like Chrome's safe browsing protection, I can turn it off or change browsers. What do I do if I don't like my government's safe browsing protection? Is it as easy to opt out of as a corporate-provided one?


> Well, no... That's the thing about false negatives vs. true positives. The more effective the safe browsing protection is, the fewer false negatives.

That reiterates what I said: the harms happened before, and will continue happening, no matter what. No action will reduce them to 0.

> a) you have to be doing something pretty unusual to trigger a false positive

I don't think that's true here. Many people have been harmed due to trivial, common actions. For other victims, the charges against them are secret, and they are not afforded due process, an impartial judge, or even the right to face their accusers. Very tyrannical and Kafkaesque. Without transparency into the precise rules and process, we categorically cannot make the above claim, and the evidence seems to belie it.

> This seems to be a claim "Safe browsing should be a government institution."... What do I do if I don't like my government's safe browsing protection? Is it as easy to opt out of as a corporate-provided one?

Good news! It isn't. Who says the government needs to provide safe browsing protection? There are other levers governments can pull, like investigating and prosecuting criminals, and making victims whole. "Safe browsing" exists because the government has so far failed at that. Law enforcement is more focused on rounding up and perpetrating violence upon people with a different skin color than theirs, I guess.

All that said, I feel like I articulated a pretty good alternative if google really wants to keep safe browsing going: just provide due process to their victims, which includes: a presumption of innocence (one even weaker than in public policy); the right to face their accusers; the right to a speedy, public trial; the right to defend themselves; and the right to an impartial judge/jury.


> No action will reduce them to 0.

I extended the grace of assuming you weren't making the absurd argument "You can't ever protect against all ills, therefore you shouldn't try." If that assumption was an error, I won't keep making it.

> Many people have been harmed due to trivial, common actions

[citation needed]. 100% of the safe browsing flaggings I am familiar with are "We let users put content on our site without vetting it," or "We hosted binaries without vetting them and one was malware," or "We got owned and didn't know it." I'm sure there are false positives that are truly false, but I'm aware of zero. Google isn't generally in the business of being preemptive about this sort of thing; they tend to add a site to the safe browsing list only after their crawlers have detected actual threat behavior. Even in the case of immich.cloud, I don't see any evidence that Immich audited 100% of the *.immich.cloud sites for malware, or for users intentionally using it to put up a "This is definitely the Bank of America login page" site with Immich de facto signing off on the legitimacy of that site.

> they are not afforded due process, an impartial judge, or even the right to face their accusers

I would be in favor of improvements to the restoration process, but there are very good reasons to make addition to the safe browsing list fast; sites on the safe browsing list demonstrated an actual harm vector. Being added to the list isn't being found guilty; it's being arrested by a cop on the street on suspicion of guilt. I will concur that Google is under-incentivized to aggressively crawl red-paged sites to see if they are recovered.

> Very tyrannical and Kafkaesque

The key difference is that users may stop using Chrome if it bothers them. Since they don't, I think we can make the educated guess that the benefit outweighs the harm for Chrome users.

> Without transparency into the precise rules and process

Non-starter. The process is a cat-and-mouse game against hostile actors. They can't inform the public of all the rules without informing the hostile actors at the same time. This is similar to the reason they don't publish an exhaustive list of what gets an ad banned.

> Who says the government needs to provide safe browsing protection?

Perhaps I misunderstood you. Two posts up: "a stranger didn't swoop in to save a 2nd stranger from a 3rd stranger. That's ok: superheroes are bad government. The government should be the one protecting citizens." I thought this meant that it was not okay for private companies to provide safe browsing protection, but it would be okay for a government to do so? Perhaps the superhero metaphor is lost on me.

> There are other levers governments can pull, like investigating and prosecuting criminals, and making victims whole. "Safe browsing" exists because the government has so far failed at that.

Here we are in agreement. But I think rounding up and prosecuting criminals and making victims whole in this context would require sweeping, transnational legal changes: to e.g. give a grandmother in Idaho her life savings back after operatives in Russia steal it, at least two governments would have to conduct cross-border data forensics just to resolve who the culprit was, at a cost that may be an order of magnitude greater than the life savings in question. I'm not holding my breath (especially since one of those countries is currently under sanctions from the other).

> just provide due process to their victims, which includes: a presumption of innocence

It's a non-starter. The "victims" here are websites with dodgy reputations and even dodgier ways of contacting them to let them know they look sketchy, much less any expectation that they will respond to that information. Google red-paging the site is the method of contacting them; the only one that works reliably. The rights you claim are necessary exist to protect people against a government, which one cannot choose not to be a citizen of; not against the safe-access policies of a browser that people can choose not to use.

Your fundamental concern is that the current status quo is tilted towards user safety and against new site admins relative to incumbent sites. Yes. This is the correct place for a browser to put the risk-reward weights. Email has already followed the same pattern, for very similar reasons.

P.S.: FWIW, Immich tried to switch from immich.cloud to immich.build and immediately tripped over another issue: their SSL certificate was issued for immich.cloud, so it can't validate a site under the immich.build domain. Independent of all the other issues here about safe browsing in general, Immich seems to be demonstrating a spooky lack of understanding of how to architect web services around the multiple features, paid for in blood, that keep end-users safe, and I don't feel a lot of trust in the project (or its parent, Futo) at this time.
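A minimal sketch of what that failure looks like, assuming a certificate whose names cover *.immich.cloud being presented for a host under immich.build (the hostnames below are illustrative; Python's default TLS context rejects the handshake on the hostname mismatch):

    import socket
    import ssl

    def tls_hostname_check(host: str, port: int = 443) -> None:
        # The default context verifies the chain and checks the hostname
        # against the certificate's Subject Alternative Names.
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    sans = [name for _, name in tls.getpeercert().get("subjectAltName", ())]
                    print(f"{host}: certificate accepted; names include {sans[:3]}")
        except ssl.SSLCertVerificationError as err:
            # A cert issued for *.immich.cloud presented on an immich.build host
            # fails here with a hostname-mismatch verification error.
            print(f"{host}: certificate rejected: {err}")

    tls_hostname_check("example.org")            # expected to pass
    tls_hostname_check("wrong.host.badssl.com")  # public test host serving a mismatched cert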



