• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: June 11th, 2023

  • I’m not seeing anything that’s not a great look about requiring strong authentication for access to sensitive portions of a user’s account. What you’re saying is akin to calling it a bad look that they force users to use complex passwords against their wishes.

    I’m not sure what “trust me bro, my cloud is safe” has to do with anything. Passkeys live on your device. There are ways of facilitating device-to-device migrations of the keys if you want them, but you don’t need them to use passkeys. And at least on Android you don’t even need to use Google to manage the keys.

    Most semiconductors are closed source. The processor, RAM, and radio are also more than likely closed. The software interfaces to all of them have open specifications and implementations. There are, like, six for Linux. Microsoft open sourced theirs.
    TPMs are not security through obscurity. They are obscure, but that’s not a critical component of their security model.

    What they do isn’t really what “collecting biometrics” implies. They’re storing key points, hashed in a way that allows similarities to be compared. Even if it weren’t encrypted in a non-exportable way, you still couldn’t do anything with it beyond checking for a similarity score.
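    As a sketch of what that template matching amounts to: the stored data is a list of measurements, and the only question you can ask of it is “how close is this new scan?” The feature values, threshold, and similarity metric below are all made up for illustration; real sensors use proprietary embeddings inside secure hardware.

```python
import math

def cosine_similarity(a, b):
    """Compare two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical enrolled template: key-point measurements, not an image.
enrolled = [0.91, 0.12, 0.55, 0.33]
fresh_scan = [0.89, 0.14, 0.52, 0.35]

MATCH_THRESHOLD = 0.99  # tuned per sensor in real systems
print(cosine_similarity(enrolled, fresh_scan) >= MATCH_THRESHOLD)
```

    Even with the template in hand, all it answers is whether a fresh reading scores above a threshold; there’s no path back to a face or a fingerprint.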

    You’ve done a good job illustrating what I said previously: there’s sometimes a disconnect between privacy and security concerns, and so sometimes people don’t understand something about security.


  • That’s close enough from a privacy perspective. There are also limitations on which domains can request the auth, specifically “only the one the credential is for”, and there’s typically a different key per domain and per user.
    It’s also implemented in a way where, if the user doesn’t choose to disclose their account to the service, the service can’t know.
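    The scoping above can be sketched in a few lines. This is a toy model, not WebAuthn: real passkeys mint a fresh asymmetric keypair per site and account, but the two properties it shows — independent keys per (domain, user) pair, and the client refusing to hand a credential to any other origin — are the real ones.

```python
import hashlib
import hmac

def derive_credential(device_seed: bytes, rp_id: str, user_handle: str) -> bytes:
    """Toy stand-in for per-site passkeys: one independent key per (site, user).

    Real passkeys are fresh keypairs, not an HMAC derivation; this just
    illustrates that credentials for two sites share nothing linkable.
    """
    info = f"{rp_id}|{user_handle}".encode()
    return hmac.new(device_seed, info, hashlib.sha256).digest()

def assert_origin(request_origin: str, rp_id: str) -> None:
    """Only the registered domain (or a subdomain of it) may use a credential."""
    host = request_origin.split("://", 1)[-1].split("/", 1)[0]
    if host != rp_id and not host.endswith("." + rp_id):
        raise PermissionError("origin not allowed to use this credential")

seed = b"secret kept in the device's secure element"
k1 = derive_credential(seed, "example.com", "alice")
k2 = derive_credential(seed, "other.org", "alice")
assert k1 != k2  # nothing ties the two sites' credentials together

assert_origin("https://login.example.com", "example.com")  # allowed
```

    Because each site only ever sees its own key, two services comparing notes can’t link the same user across domains, which is the privacy property people tend to miss.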

    Caring about privacy and caring about the details of a security protocol are distinct. You’d be surprised how many people who care about privacy are deeply wary of passkeys because of the biometric factor, which is unfortunate because the way it authenticates is a lot harder to track across domains by design.

    I understood they had a lot of concerns, one of which was biometrics via passkeys, since GitHub was a very early adopter due to the supply chain risk compromised accounts pose.




  • Yeah, the conventional ones still draw a good chunk of power, and they’re not clean, but they’re not dirty either. Same as how a grocery store isn’t good for the environment, but it’s not the first place you’d look when cleaning things up.

    They tend to be boring, and are usually not a public thing but just something owned by a company to house their computers. The only reason I know about the ones near me is I used to work at one and people would move jobs to or from other ones. (As an aside, a datacenter is a great place to nap if you like white noise).

    For a sense of scale:

    This is the site of an OpenAI data center. The yellow square is about 1 square mile and mostly encompasses the area they plan to fill or have filled.

    That angle shows more build out.

    This photo has two normal data centers in it. The yellow square is also about 1 square mile. I’ve highlighted the data centers in red. One is to the left of the square near the middle, and the other is down from the right side, near the big piles of what looks like rocks. (Spoilers: it’s rocks. They make asphalt.) The sprawling complex in the upper right is a refrigerated grocery distribution complex. The building in the middle, on the other side of the block from the asphalt, is a coal power plant.

    Of the things in this picture, I’m most upset about the giant freeway interchange. Coal is shit, but it’s a modern plant, so it’s not belching soot, just CO2, and the utility is phasing it out anyway. The grocery traffic is mostly dead except between midnight and 7am, when they do restocks.
    I can hear the freeway if I go outside.


  • I think the part you’re missing is that 1) it’s my community too 2) they’re not talking about AI data centers, or new data centers or anything like that, they’re petitioning to ban all data centers, and 3) we have multiple data centers in the city already that no one complained about until AI data centers became a thing people felt concerned about.

    There’s a major difference between the 2-square-mile hyperscale AI data center that requires a nuclear reactor and a full water treatment plant to cool, and the 2-acre data center that’s air cooled, has no more ground pollution than any other parking lot, and is essentially a warehouse.
    The state government has at least two in the city, for processing electronic tax records and applications and for hosting service sites. We have a few national insurance companies that need to process all the things they process. A research university and a web hosting company round out the list of the ones I know about.

    This is my entire point about why sometimes it’s really necessary to point out that what someone is referring to is only a small part of what the words they’re using describe. The language being imprecise doesn’t matter until someone proposes a law outlawing chemicals, shuttering all data centers, or banning AI.

    LLMs are problematic. My fancy rice maker isn’t.


  • I take your point. :)

    It’s worth mentioning in my opinion though, because if someone were to say “we should ban chemicals” it’d be worthwhile to point out what that actually means.

    I don’t actually think the broadness of the category is intentionally abused, it’s just that it’s an incredibly common thing to remove anything from the AI category that’s explicable.

    I feel slightly more Hanlon’s-razor about it, since there are people in my city talking about and petitioning on the popular notion of banning all data centers from the state, and how awful it would be if a data center came here. I know what they mean, but it’s not what they’re trying to get the law to do, and our city already has six data centers I know of off the top of my head. The language drift is fine, but when it starts to feed into policy it’s another issue.





  • Yeah, OCR is a type of AI. The big advantage of modern techniques is that they can factor in context a bit better. It’s the same principle, but a different mechanism, behind how you know a red octagon with S__P on it says stop, even if the sign is dented, a letter fully fell off, and it’s raining and dark.

    It also means it’s sometimes wildly inaccurate, in cases where it’s just so much more likely that the sign said something else. Like how on a bright sunny day, with perfect clarity and a crisp new sign with extra good visuals, you’ll hit the brakes for a red octagon that says §¥¢¶. It’s just very unlikely that that would coincidentally be on a red octagon near the road, so it’s more likely you saw wrong and it was actually the normal thing.
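    That tradeoff can be sketched as a prior times a likelihood. The candidate legends, prior weights, and similarity-based likelihood below are made up for illustration (with “SYCP” standing in for a visually plausible garbage legend like §¥¢¶); real vision models don’t factor this neatly, but the failure mode is the same: a strong enough prior outvotes what was actually seen.

```python
from difflib import SequenceMatcher

# Hypothetical prior: how likely each legend is on a red sign by the road.
PRIORS = {"STOP": 0.999, "SYCP": 1e-6}

def likelihood(observed: str, candidate: str) -> float:
    """How plausible the noisy reading is, given the candidate legend."""
    return SequenceMatcher(None, observed, candidate).ratio() + 1e-9

def best_guess(observed: str) -> str:
    """Pick the legend maximizing prior * likelihood."""
    return max(PRIORS, key=lambda c: PRIORS[c] * likelihood(observed, c))

print(best_guess("S__P"))  # degraded sign: the prior fills in STOP
print(best_guess("SYCP"))  # clean read of nonsense: the prior still says STOP
```

    The first case is the feature: a damaged sign still reads as stop. The second is the failure: even a perfect reading of an unlikely legend gets overridden, which is exactly the “you probably saw wrong” behavior described above.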