0. I verified my LinkedIn identity. Here's what I handed over (thelocalstack.eu)
1354 points · 462 comments · by ColinWright
LinkedIn identity verification requires users to share extensive biometric and personal data with Persona, a third-party U.S. company that uses the information for AI training and shares it with 17 subprocessors, potentially exposing European users to U.S. surveillance under the CLOUD Act. [src]
The discussion highlights deep skepticism regarding LinkedIn's identity verification process, with users citing historical privacy breaches [2][6] and the "parasitical" nature of data-driven business models [7]. While a Persona representative and industry insiders clarify that data is often deleted quickly and not shared with every listed subprocessor [1][5], others remain "deeply uncomfortable" with the requirement to provide biometric data for basic account access [4]. A significant portion of the debate centers on geopolitical tensions, with some defending the dominance of American tech infrastructure [0] while others argue that the US has actively engineered European digital dependency [3][9].
1. Claws are now a new layer on top of LLM agents (twitter.com)
351 points · 795 comments · by Cyphase
Andrej Karpathy describes "Claws" as a powerful new orchestration layer for AI agents while warning of significant security risks in large, unvetted implementations like OpenClaw. [src]
The discussion defines "claws" as persistent, asynchronous LLM agents that run on a schedule (like "cron-for-agents") with broad permissions to access credentials, email, and the web [0][3]. While some users remain skeptical of their utility or see them as "vanity AI" [5][7], others envision practical applications such as automated media archiving [6].
A significant portion of the debate focuses on the rapid shift from fearing "Skynet" to granting AI autonomous internet access [1][8]. Critics argue that security concerns are often "overdone" by bureaucratic "policy people" [2], while proponents of safety suggest technical guardrails, such as requiring one-time passwords (OTPs) before an agent can execute high-risk actions [4].
2. Why is Claude an Electron app? (dbreunig.com)
392 points · 402 comments · by dbreunig
Despite the rise of AI coding agents, Anthropic continues to use the Electron framework for its desktop app because agents still struggle with the "last mile" of development, maintenance, and cross-platform support required for native applications. [src]
Anthropic engineers chose Electron to leverage their team's prior expertise and ensure feature parity across web and desktop platforms, though they acknowledge this involves performance tradeoffs [0]. Critics argue that a multi-billion dollar company should prioritize native performance over "dumpy" UX and bloated dependencies [6][7], noting the irony that AI tools—which claim to make porting code effortless—are not being used to move away from JavaScript [9]. Meanwhile, some users defend the choice as a pragmatic business decision, dismissing complaints about RAM usage as "HN-sniping" [3][8].
3. What not to write on your security clearance form (1988) (milk.com)
465 points · 207 comments · by wizardforhire
After a 12-year-old’s homemade code key triggered a massive FBI spy investigation in 1943, a security officer later forced him to omit the incident from his clearance application to avoid a permanent bureaucratic rejection. [src]
The security clearance process is often viewed as a "game" where applicants are frequently encouraged—sometimes even by security officers themselves—to omit or lie about past indiscretions to fit into rigid bureaucratic categories [0][4][6]. While the system is intended to identify blackmail risks like debt or substance abuse, critics argue that forcing applicants to lie actually creates new vulnerabilities for extortion [5]. There is a notable disparity in enforcement, where functional alcoholism is often overlooked while even minor, historical drug use can lead to immediate disqualification [1][9].
4. CXMT has been offering DDR4 chips at about half the prevailing market rate (koreaherald.com)
225 points · 227 comments · by phront
Chinese chipmaker CXMT is challenging Samsung and SK hynix by offering legacy DDR4 DRAM at half the market rate, leveraging state subsidies to gain market share while simultaneously expanding into high-end HBM3 production to compete in the AI memory sector. [src]
The entry of Chinese firms like CXMT into the DRAM market is seen as a strategic payoff for decades of state subsidies, potentially allowing them to dominate the sector through aggressive pricing while Western incumbents focus on high-margin AI chips [0][2]. While some argue that low prices are a win for consumers and that domestic industries simply shift toward higher-value niches [1][5][7], others warn that this leads to a dangerous loss of production capacity and innovation [3]. Critics emphasize that becoming dependent on a single, potentially unfriendly supplier creates significant geopolitical risks, as specialized domestic facilities cannot easily replace mass-market capacity during a trade war or conflict [6][9].
5. EU mandates replaceable batteries by 2027 (2023) (environment.ec.europa.eu)
228 points · 178 comments · by cyrusmg
The EU's new Batteries Regulation mandates that by 2027, consumers must be able to remove and replace portable batteries in electronic products. The law also introduces strict carbon footprint limits, recycling targets, and due diligence requirements to ensure a sustainable and circular battery life cycle. [src]
Proponents of the EU mandate argue it is a necessary step against planned obsolescence and the "mountain-sized piles" of toxic e-waste generated by non-removable designs [0]. Critics, however, contend that modern batteries are more durable than their predecessors and that integrated designs allow for sleeker, water-resistant devices with better component density [1][4][5]. While some users value the convenience of carrying spare batteries to eliminate charging downtime [8], others argue that the useful life of a device is often limited by CPU and RAM obsolescence rather than battery degradation [5].
6. How Taalas "prints" an LLM onto a chip (anuragk.com)
270 points · 132 comments · by beAroundHere
Startup Taalas has developed a fixed-function ASIC chip that "hardwires" Llama 3.1 8B weights directly into silicon, achieving 17,000 tokens per second by eliminating the memory bandwidth bottleneck found in traditional GPU-based inference systems. [src]
The transition from software-based LLM inference to dedicated hardware is viewed by some as an inevitable evolution similar to the history of GPUs [0][8]. While technical analysis suggests that packing billions of coefficients into transistors is highly feasible through quantization and block compression [7], critics argue that the rapid pace of AI development makes non-rewritable chips impractical for models that become outdated in weeks [4][6]. Despite these concerns, there is significant interest in the potential for ultra-efficient, "plug-and-play" inference ASICs in form factors like USB-C drives or integrated "AI cores" within consumer electronics [1][2][3][5][9].
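The quantization and block compression mentioned in [7] can be illustrated with a toy block-wise scheme: each block of weights stores one fp16 scale plus low-bit integer codes, shrinking storage roughly 7x before any further compression. The block size and bit width below are illustrative assumptions, not Taalas's actual scheme:

```python
import numpy as np

def quantize_blocks(w: np.ndarray, block: int = 32, bits: int = 4):
    """Block-wise symmetric quantization: one fp16 scale per `block`
    weights, plus `bits`-bit integer codes for the weights themselves."""
    w = w.reshape(-1, block)
    qmax = 2 ** (bits - 1) - 1                       # 7 for 4-bit codes
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                        # avoid divide-by-zero
    codes = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return codes, scales.astype(np.float16)

def dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
codes, scales = quantize_blocks(w)
w_hat = dequantize(codes, scales).reshape(-1)

# Storage: 4 bits/weight + one 16-bit scale per 32 weights = 4.5 bits/weight,
# versus 32 bits/weight for fp32. At that rate an 8B-parameter model fits in
# roughly 4.5 GB of on-die storage.
err = np.abs(w - w_hat).mean()
print(f"mean abs quantization error: {err:.4f}")
```

The point of [7] is that at a few bits per weight, billions of coefficients become a plausible transistor budget; the tradeoff is the small reconstruction error printed above.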
7. Personal Statement of a CIA Analyst (antipolygraph.org)
226 points · 156 comments · by grubbs
A former CIA analyst details her career-long struggle with the polygraph, describing how the test's unreliability and the aggressive tactics of examiners led her to eventually refuse further testing and resign from her position as a defense contractor. [src]
The discussion centers on the CIA's continued use of polygraph tests, which many users dismiss as pseudoscience akin to the agency's historical interest in telekinesis [0][2]. While critics argue the tests lack scientific validity, others contend they are effective "pressure cookers" designed to intimidate candidates into confessions through the fear of detection [4][9]. Commenters also debated the ethics and practicality of the agency's rigid hiring standards, ranging from amusement at the idea of an analyst's emotional vulnerability to defense of disqualifying candidates for petty theft [3][6][7].
8. Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU (github.com)
300 points · 80 comments · by xaskasdf
NTransformer is a high-efficiency C++/CUDA inference engine that enables running Llama 3.1 70B on a single 24GB RTX 3090 by bypassing the CPU to stream model layers directly from NVMe storage to the GPU. [src]
This project demonstrates running a 70B model on a single GPU by bypassing the CPU to load data directly from NVMe, though users note the resulting speed of 0.2 tokens per second is too slow for interactive use [0][1]. While some argue that a well-quantized smaller model or standard CPU/RAM inference might offer better latency-quality tradeoffs [0][1][2], others suggest this approach is ideal for cost-effective, non-interactive batch workloads like automated content pipelines [6]. The discussion also explores theoretical optimizations, such as using multi-tier Mixture of Experts (MoE) to balance weights between VRAM, RAM, and NVMe, or utilizing technologies like GPUDirect and DirectX APIs to further streamline data transfers [3][5][7][9].
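The ~0.2 tokens/second figure cited in [0][1] is consistent with a simple bandwidth calculation: a dense model must read every weight once per generated token, so decode speed is capped by how fast the NVMe drive can stream the weights that don't fit in VRAM. A back-of-envelope sketch (the drive speed and quantization level are illustrative assumptions, not the project's measured configuration):

```python
# Back-of-envelope: dense-model decode speed when weights stream from NVMe.
params = 70e9            # Llama 3.1 70B parameter count
bytes_per_param = 0.5    # assume 4-bit quantized weights
nvme_gbps = 7.0          # assume a fast PCIe 4.0 NVMe drive, in GB/s

weights_gb = params * bytes_per_param / 1e9   # ~35 GB of weights
seconds_per_token = weights_gb / nvme_gbps    # every weight read once per token
print(f"{weights_gb:.0f} GB of weights -> {1 / seconds_per_token:.2f} tokens/s")
# -> roughly 0.2 tokens/s, matching what commenters observed
```

This also explains why the MoE idea in [3] helps: a mixture-of-experts model activates only a fraction of its weights per token, so far fewer bytes cross the NVMe link per step.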
9. Acme Weather (acmeweather.com)
234 points · 141 comments · by cryptoz
Acme Weather is a new subscription weather app, priced at $25/year and currently available only in the US. [src]
The discussion reflects significant skepticism regarding Acme Weather’s $25/year subscription model and its utility in a market saturated with free, high-quality alternatives [1][2][6][9]. Many users expressed frustration over the app's US-only availability and the lack of specific features like "feels like" forecasts or historical data for planning [0][3][4]. While some appreciate the team's "depth of thought" and expertise [8], others argue that local government-funded apps often provide superior data and features for free [7].
Your daily Hacker News summary, brought to you by ALCAZAR. Protect what matters.