Google Is Allowing Device Fingerprinting
Lukasz Olejnik writes about device fingerprinting, and why Google’s policy change to allow it in 2025 is a major privacy setback.
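Fingerprinting works by combining many weak, individually innocuous device signals into one identifier that is stable across visits and needs no cookie. As a rough illustration only (the signal names below are illustrative stand-ins, not Google's or any vendor's actual mechanism):

```python
# Toy illustration of device fingerprinting: hash many weak signals
# into one stable identifier. Real fingerprinting scripts use far more
# signals (canvas rendering, installed fonts, audio stack, GPU, etc.).
import hashlib
import json

def fingerprint(signals: dict) -> str:
    # Canonicalize so the same signals always hash the same way.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440x24",
    "timezone": "Europe/Warsaw",
    "language": "pl-PL",
    "platform": "Linux x86_64",
    "hardware_concurrency": 8,
}
print(fingerprint(device))  # stable across visits, with no cookie to clear
```

The point is that no single signal identifies you, but the combination usually does, and unlike a cookie there is nothing for the user to delete.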
EDITED TO ADD (1/12): Slashdot thread.
Interesting analysis:
We introduce and explore a little-known threat to digital equality and freedom: websites geoblocking users in response to political risks from sanctions. U.S. policy prioritizes internet freedom and access to information in repressive regimes. Clarifying distinctions between free and paid websites, allowing trunk cables to repressive states, enforcing transparency in geoblocking, and removing ambiguity about sanctions compliance are concrete steps the U.S. can take to ensure it does not undermine its own aims.
The paper: “Digital Discrimination of Users in Sanctioned States: The Case of the Cuba Embargo”:
Abstract: We present one of the first in-depth and systematic end-user centered investigations into the effects of sanctions on geoblocking, specifically in the case of Cuba. We conduct network measurements on the Tranco Top 10K domains and complement our findings with a small-scale user study with a questionnaire. We identify 546 domains subject to geoblocking across all layers of the network stack, ranging from DNS failures to HTTP(S) response pages with a variety of status codes. Through this work, we discover a lack of user-facing transparency; we find 88% of geoblocked domains do not serve informative notice of why they are blocked. Further, we highlight a lack of measurement-level transparency, even among HTTP(S) blockpage responses. Notably, we identify 32 instances of blockpage responses served with 200 OK status codes, despite not returning the requested content. Finally, we note the inefficacy of current improvement strategies and make recommendations to both service providers and policymakers to reduce Internet fragmentation.
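To make the measurement methodology concrete, here is a minimal sketch of a per-domain probe in the spirit of the paper's approach: check each layer of the stack in turn, from DNS resolution to the HTTPS response. The paper's actual tooling is not reproduced here; the blockpage keyword heuristic in particular is an illustrative assumption (a real study would fingerprint known blockpage HTML).

```python
# Minimal sketch of a layered geoblocking probe. Classifies a domain
# as a DNS failure, connection failure, HTTP error, suspected
# 200-OK blockpage, or apparently accessible.
import socket
import urllib.error
import urllib.request

def probe(domain: str, timeout: float = 10.0) -> str:
    # Layer 1: DNS resolution.
    try:
        socket.getaddrinfo(domain, 443)
    except socket.gaierror:
        return "DNS failure"

    # Layer 2: HTTPS request.
    req = urllib.request.Request(
        f"https://{domain}/", headers={"User-Agent": "Mozilla/5.0"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
            # A 200 OK can still be a blockpage -- the paper found 32
            # such cases. This keyword check is a crude stand-in.
            markers = ("not available in your country", "access denied",
                       "sanction")
            if any(m in body.lower() for m in markers):
                return "200 OK blockpage (suspected)"
            return f"accessible (HTTP {resp.status})"
    except urllib.error.HTTPError as e:
        return f"HTTP {e.code} (possible blockpage)"
    except (urllib.error.URLError, OSError):
        return "connection failure"

if __name__ == "__main__":
    for d in ("example.com", "example.org"):
        print(d, "->", probe(d))
```

Run from a vantage point inside a sanctioned country, a classifier like this over the Tranco Top 10K is essentially the measurement the paper describes; the 200-OK case is why status codes alone are not enough.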
This feels important:
The Secret Service has used a technology called Locate X which uses location data harvested from ordinary apps installed on phones. Because users agreed to an opaque terms of service page, the Secret Service believes it doesn’t need a warrant.
Interesting analysis:
Although much attention is given to sophisticated, zero-click spyware developed by companies like Israel’s NSO Group, the Italian spyware marketplace has been able to operate relatively under the radar by specializing in cheaper tools. According to an Italian Ministry of Justice document, as of December 2022 law enforcement in the country could rent spyware for €150 a day, regardless of which vendor they used, and without the large acquisition costs which would normally be prohibitive.
As a result, thousands of spyware operations have been carried out by Italian authorities in recent years, according to a report from Riccardo Coluccini, a respected Italian journalist who specializes in covering spyware and hacking.
Italian spyware is cheaper and easier to use, which makes it more widely used. And Italian companies have been in this market for a long time.
DeFlock is a crowd-sourced project to map license plate scanners.
It only records the fixed scanners, of course. The mobile scanners on cars are not mapped.
The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it’s how the model gets programmed—the definition makes no sense.
And it’s confusing; most “open source” AI models—like LLAMA—are open source in name only. But the OSI seems to have been co-opted by industry players that want both corporate secrecy and the “open source” label. (Here’s one rebuttal to the definition.)
This is worth fighting for. We need a public AI option, and open source—real open source—is a necessary component of that.
But while open source should mean open source, there are some partially open models that need some sort of definition. There is a big research field of privacy-preserving, federated methods of ML model training and I think that is a good thing. And OSI has a point here:
Why do you allow the exclusion of some training data?
Because we want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information like decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.
How about we call this “open weights” and not open source?
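To make the privacy-preserving, federated training mentioned above concrete: in federated averaging (FedAvg), the canonical method, raw data never leaves the clients; only model weights are shared and averaged. A minimal sketch on a toy linear model with synthetic data:

```python
# Minimal sketch of federated averaging (FedAvg): each client takes a
# gradient step on its own private data; the server averages the
# resulting weights. The data itself is never pooled.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    # One gradient step of least-squares regression on a client's data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_data, rounds=50, dim=3):
    weights = np.zeros(dim)
    for _ in range(rounds):
        # Clients train locally; the server sees only weight updates.
        updates = [local_step(weights.copy(), X, y) for X, y in client_data]
        weights = np.mean(updates, axis=0)
    return weights

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

print(fed_avg(clients))  # approaches true_w without pooling the data
```

This is exactly the setting where "the training data is the source code" breaks down as a release requirement: the model can be trained and its weights published even though no one, including the developer, may lawfully redistribute the underlying data.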
An advocacy group is filing a Fourth Amendment challenge against automatic license plate readers.
“The City of Norfolk, Virginia, has installed a network of cameras that make it functionally impossible for people to drive anywhere without having their movements tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of their every move. This civil rights lawsuit seeks to end this dragnet surveillance program,” the lawsuit notes. “In Norfolk, no one can escape the government’s 172 unblinking eyes,” it continues, referring to the 172 Flock cameras currently operational in Norfolk. The Fourth Amendment protects against unreasonable searches and seizures and has been ruled in many cases to protect against warrantless government surveillance, and the lawsuit specifically says Norfolk’s installation violates that.
An Australian news agency is reporting that robot vacuum cleaners from the Chinese company Deebot are surreptitiously taking photos and recording audio, and sending that data back to the vendor to train their AIs.
Ecovacs’s privacy policy—available elsewhere in the app—allows for blanket collection of user data for research purposes, including:
- The 2D or 3D map of the user’s house generated by the device
- Voice recordings from the device’s microphone
- Photos or videos recorded by the device’s camera
It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs.
No word on whether the recorded audio is being used to train the vacuum in some way, or whether it is being used to train an LLM.
Slashdot thread.
Ars Technica has a good article on what’s happening in the world of television surveillance. More than even I realized.
This site will let you take a selfie with a New York City traffic surveillance camera.
EDITED TO ADD: BoingBoing post.