0. I want to wash my car. The car wash is 50 meters away. Should I walk or drive? (mastodon.world)
1513 points · 949 comments · by novemp
A Mastodon user shared a post questioning how an AI would respond to the illogical prompt of whether one should walk or drive to a car wash located only 50 meters away. [src]
The debate centers on whether LLMs possess genuine reasoning or merely follow statistical patterns, as evidenced by models that suggest walking because they weigh the short distance more heavily than the obvious need to bring the car itself to the wash [0][4]. While some argue that users shouldn't have to specify obvious details like the car's location [1][8], others contend that the prompt itself is unnaturally ambiguous and would confuse a human by implying a hidden complication [6][7]. This "edge of intelligence" highlights a disparity between free and paid models, leading to concerns that the widespread use of less capable, "hedging" AI could result in significant real-world misinformation [2][9].
1. 14-year-old Miles Wu folded an origami pattern that holds 10k times its own weight (smithsonianmag.com)
926 points · 203 comments · by bookofjoe
A 14-year-old student measured the load-bearing capacity of the Miura-ori origami fold, demonstrating that the folded paper structure can support roughly 10,000 times its own weight. [src]
While the project highlights a 14-year-old’s work, commenters emphasize that his success stems from six years of dedicated practice and the high neuroplasticity of youth [0][1]. Some users clarify that the student did not invent the "Miura-Ori" fold but rather measured its load-bearing capacity, though there is debate over the true origins of the design [2][9]. Technical skepticism exists regarding the practical application for emergency housing due to paper's vulnerability to lateral loads and weather, though others suggest it could serve as a high-strength core for composite materials [3][5][6].
2. Ministry of Justice orders deletion of the UK's largest court reporting database (legalcheek.com)
522 points · 346 comments · by harel
The Ministry of Justice has ordered the deletion of Courtsdesk, the UK’s largest court reporting database, citing unauthorized data sharing with an AI company. Journalists warn the move undermines open justice, as the platform provided critical access to criminal court listings that the government’s own systems often fail to provide. [src]
The Ministry of Justice's decision has sparked a debate over whether court records should be universally accessible public data or protected to prevent "forever-convictions" in AI datasets [0][1]. While some argue that permanent digital records prevent rehabilitation for minor offenses, others contend that the data should remain public but be legally protected from use in discriminatory decision-making [2][3][7]. Critics of the shutdown suggest the move may be a "cover up" or an overreaction to AI scraping that ultimately cripples journalistic transparency [5][9].
3. What your Bluetooth devices reveal (blog.dmcc.io)
540 points · 194 comments · by ssgodderidge
A developer created Bluehood, a Bluetooth scanning tool, to demonstrate how constantly enabled devices leak sensitive metadata that can be used to track daily routines, identify neighbors, and monitor household patterns without user consent. [src]
Users express concern that the normalization of "always-on" Bluetooth and Wi-Fi allows for pervasive tracking by retailers and passersby, often through persistent identifiers like car model names or unique device IDs [0][7][8]. While some argue that this data is essential for medical device functionality [4], others point out that even more obscure signals, such as Tire Pressure Monitoring Systems (TPMS), broadcast unique, unencrypted IDs that are trivial to track [1]. Despite the existence of more overt tracking methods like license plates and CCTV, there is a call for better MAC randomization to prevent Bluetooth accessories from serving as permanent beacons [5][6].
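The MAC randomization commenters are asking for works by having a device advertise a freshly generated, locally administered address instead of its fixed hardware one. A minimal sketch of what such a randomized address looks like (this illustrates the IEEE 802 address bits involved, not any particular vendor's rotation scheme):

```python
import secrets

def random_private_mac() -> str:
    """Generate a locally administered, unicast MAC address.

    Randomization schemes mark the address as locally administered by
    setting bit 0x02 of the first octet, and keep it unicast by clearing
    bit 0x01, so it can never collide with a real multicast address.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{b:02x}" for b in octets)

print(random_private_mac())  # e.g. "6a:3f:91:0c:d4:7e" — different every call
```

Because the address changes on each rotation, a passive scanner like the one in the article can no longer use it as a stable identifier; the accessories discussed in the thread leak precisely because they skip this step and broadcast one fixed address forever.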
4. Thanks a lot, AI: Hard drives are sold out for the year, says WD (mashable.com)
376 points · 316 comments · by dClauzel
Western Digital has already sold out its entire 2026 hard drive inventory due to massive demand from AI companies, warning consumers to expect continued hardware shortages and price hikes as enterprise orders now account for 95 percent of the company's revenue. [src]
The current hard drive shortage is attributed to a massive surge in AI-driven demand for storage and compute, though users disagree on whether this represents a sustainable shift in computer usage [2] or a bubble fueled by "irrational money" [0]. Manufacturers remain cautious about expanding production capacity, fearing a repeat of previous market crashes or a post-boom glut similar to the crypto and dot-com eras [1][3][4][6]. Consequently, some consumers worry that high prices and corporate hoarding could eventually make personal hardware ownership prohibitively expensive [5].
5. Qwen3.5: Towards Native Multimodal Agents (qwen.ai)
433 points · 213 comments · by danielhanchen
Alibaba has released **Qwen3.5-397B-A17B**, an open-weight, native multimodal model featuring a hybrid architecture that activates only 17 billion parameters for high efficiency. It supports 201 languages and demonstrates state-of-the-art performance in reasoning, coding, and autonomous agent tasks across text, image, and video modalities. [src]
The Qwen3.5 release has sparked optimism regarding the possibility of running frontier-level models locally on future hardware like the M5 MacBook Pro or AMD Strix Halo devices [0][6][8]. While the model successfully navigates common logic traps that stump other LLMs [1], some users remain skeptical, suggesting the high benchmark performance may result from overfitting or training on outputs from other frontier models [3][9]. There is a specific demand for a vision-enabled model in the 80-110B parameter range to maximize the utility of 128GB RAM systems [2][5].
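The 128GB-RAM sizing argument can be made concrete with a rough back-of-the-envelope calculation (a sketch that counts weight memory only, assuming 4-bit quantization and ignoring KV cache and activations):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough lower bound on weight memory: parameters × bits / 8, in GB.

    Real deployments need extra headroom for KV cache, activations,
    and the OS, so a model must come in well under total RAM.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A hypothetical 110B model at 4-bit fits a 128GB machine with headroom:
print(model_memory_gb(110, 4))  # → 55.0 (GB of weights)

# The full 397B release at 4-bit would not:
print(model_memory_gb(397, 4))  # → 198.5 (GB of weights)
```

This is why commenters target the 80-110B range for 128GB systems: the weights alone stay near or below half of RAM, leaving room for long contexts.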
6. Anthropic tries to hide Claude's AI actions. Devs hate it (theregister.com)
396 points · 240 comments · by beardyw
Developers are criticizing Anthropic for updating its Claude Code tool to hide specific file names during processing, a change intended to reduce terminal noise but which users argue hinders security auditing, error correction, and token management. [src]
Developers are critical of Anthropic's decision to hide Claude's internal actions, arguing that transparency is essential for catching errors before an AI modifies files or breaks a codebase [0][9]. While some suggest this shift reflects a move toward autonomous "agent teams" where only the final output matters, critics contend that current models still require constant human "babysitting" to prevent compounding mistakes [1][6][8]. Despite skepticism regarding AI's ability to handle complex, legacy codebases, some engineers report significant velocity gains by treating agents like junior developers whose work is strictly managed through rigorous PR reviews and testing [2][4].
7. The Israeli spyware firm that accidentally just exposed itself (ahmedeldin.substack.com)
286 points · 335 comments · by 0x54MUR41
Israeli surveillance firm Paragon Solutions accidentally exposed its "Graphite" spyware dashboard on LinkedIn, revealing an interface used to intercept encrypted communications. The leak highlights the "mercenary spyware" industry's reach, involving high-profile acquisitions and contracts with government agencies like U.S. Immigration and Customs Enforcement (ICE). [src]
The discussion highlights Israel’s unique "feedback loop" between military intelligence and private startups, which leverages decades of surveillance data from Palestinian territories to refine technologies like facial recognition [0][4][5]. While some users view this pervasive surveillance as a pragmatic necessity for national security and global counter-terrorism, others argue it creates a "cycle of aggression" or serves as a tool for unilateral leverage [1][2][5]. Amidst concerns over the global reach of these tools, commenters suggest minimizing personal attack surfaces through device hygiene while noting that avoiding such sophisticated tracking is increasingly difficult [3][4][7].
8. Privilege is bad grammar (tadaima.bearblog.dev)
336 points · 280 comments · by surprisetalk
The author argues that "grammar privilege" exists because high-level executives and powerful figures often use sloppy formatting and poor syntax in emails, a luxury not afforded to lower-level employees who must maintain strict professionalism to appear competent. [src]
The discussion frames "bad grammar" and informal dress as forms of countersignalling, where high-status individuals intentionally ignore social norms because their position is already secure [0][5]. While some argue that these behaviors are merely a byproduct of prioritizing comfort over social perception [1][2], others contend that signaling is unavoidable and that observers will always infer status from one's appearance [3][8]. Notable anecdotes highlight how this dynamic plays out in reality, from security guards profiling casually dressed shoppers in luxury stores to the shift toward informal attire in first-class travel [6][9].
9. Show HN: Jemini – Gemini for the Epstein Files (jmail.world)
486 points · 103 comments · by dvrp
Jemini is a specialized AI tool designed to search and analyze Jeffrey Epstein's flight records, emails, court documents, and Amazon purchases. [src]
While some users view an AI-powered search for the Epstein files as a rare "valuable use" of LLM technology [4], others argue it is an "insanely bad idea" that risks generating false accusations regarding sex trafficking due to model hallucinations [1][5]. Concerns were raised about the authenticity of the underlying data, with one user pointing to "fake" or "injected" emails in the database that lack source links and contain confusing "sponsored" tags [3]. Additionally, critics worry the tool will fuel conspiracy theories and infringe on the civil liberties of innocent people mentioned in the documents, while also potentially training future models on sensitive or harmful behaviors [7][9].
Your daily Hacker News summary, brought to you by ALCAZAR. Protect what matters.