Top HN · Sat, Feb 28, 2026

Summaries are generated daily at 06:00 UTC


0. The United States and Israel have launched a major attack on Iran (cnn.com)

1092 points · 2358 comments · by lavp

The United States and Israel launched a joint military assault on Iran that killed Supreme Leader Ayatollah Ali Khamenei, prompting a massive wave of retaliatory Iranian strikes across the Middle East targeting Israel and several countries hosting U.S. military bases. [src]

Commenters expressed deep skepticism about the strategic goals of the attack, with some arguing that Iran poses no existential threat to Israel and that the U.S. is initiating conflicts without clear ideological or practical justification [3][6]. An emerging consensus holds that these escalations signal to the world that nuclear weapons are the only reliable path to national security, as diplomatic "deals" with the U.S. are increasingly viewed as untrustworthy [1][4][8][9]. While some hope for a swift resolution and regime change across all involved nations, others fear this is a flashpoint in a modern, fragmented World War III, marking the end of decades of global stability [2][4][7].

1. We Will Not Be Divided (notdivided.org)

2550 points · 807 comments · by BloondAndDoom

Over 700 Google and OpenAI employees signed an open letter urging their leadership to reject Pentagon demands to use AI models for domestic mass surveillance and autonomous warfare, following reports that the Department of War threatened to invoke the Defense Production Act against Anthropic over similar ethical "red lines." [src]

The government's decision to label Anthropic a supply chain risk is viewed by some as a dangerous weaponization of procurement rules to punish companies for perceived disloyalty [0][5]. While some argue the government is acting rationally by avoiding suppliers that restrict how their products are used [2], others contend that strong-arming elite scientists stifles innovation and forces political compliance [1][7]. Amidst reports of OpenAI agreeing to work with the Department of War [3], some commenters suggest that open-sourcing all AI research is the only way to prevent general intelligence from being gatekept by Machiavellian institutions [4].

2. OpenAI – How to delete your account (help.openai.com)

1838 points · 345 comments · by carlosrg

Users can permanently delete their OpenAI account through the company's Privacy Portal or directly within ChatGPT settings, a process that also cancels active subscriptions and allows for re-registration with the same email address after 30 days. [src]

The discussion centers on a growing distrust of OpenAI, with critics citing Sam Altman’s pivot toward "engagement-optimization" and the departure of founding scientists as reasons to boycott the platform [0][8]. While some users are migrating to Anthropic for its perceived scientific integrity and superior developer tools, others argue that all major AI providers involve moral compromises or face similar ethical risks [1][3][7][8]. Skeptics question the efficacy of deleting accounts in the face of inevitable mass surveillance, suggesting that government regulation is more vital than individual boycotts [2][3].

3. OpenAI agrees with Dept. of War to deploy models in their classified network (twitter.com)

1351 points · 626 comments · by eoskx

A summary could not be generated for this story: the linked page failed to load and returned only a JavaScript and browser-compatibility error message. [src]

The agreement has sparked intense debate over whether OpenAI is compromising the ethical "red lines" regarding autonomous weapons and mass surveillance that previously led the Department of War to label Anthropic a supply chain risk [0][2]. While an OpenAI employee argues the deal includes explicit prohibitions on these uses [1][3], critics suggest the primary difference is that OpenAI will defer to the government’s interpretation of "lawful use" rather than reserving the right to judge violations itself [6][8]. Some observers attribute the staff's continued employment to high compensation levels [4][9], while others have begun canceling their subscriptions in protest, favoring Anthropic’s more rigid alignment stance [5][6].

4. Statement on the comments from Secretary of War Pete Hegseth (anthropic.com)

1133 points · 349 comments · by surprisetalk

Anthropic has vowed to legally challenge the Department of War after Secretary Pete Hegseth moved to designate the company a supply chain risk following a dispute over Anthropic's refusal to allow its AI to be used for mass domestic surveillance or fully autonomous weapons. [src]

Anthropic’s refusal to comply with Department of War demands is viewed by some as a rare, principled stand where a company is willing to walk away from significant revenue [0][2]. While former employees and supporters argue the decision is driven by genuine values and a desire for a safe AI transition, skeptics suggest the move may be a calculated effort to maintain employee retention and consumer goodwill [0][2][4]. The discussion also highlights the potential for collective action among tech firms to resist government overreach and notes the recent linguistic shift toward using the term "warfighters" to describe service members [1][6][7].

5. How do I cancel my ChatGPT subscription? (help.openai.com)

1024 points · 239 comments · by tobr

Users can cancel ChatGPT subscriptions through the account settings on the website, via mobile app stores, or by deleting their account at least 24 hours before the next billing date. [src]

The discussion surrounding canceling ChatGPT subscriptions highlights a growing shift toward local LLMs, with users recommending high-memory Macs as the most consumer-friendly hardware for running capable models like Qwen [0]. While some argue that hardware costs for non-Mac users remain prohibitively high compared to a subscription [2], others say that GPT's "laziness" and poor customer support (resolving a billing dispute reportedly requires navigating a hallucinating chatbot) justify the switch [8]. Ethical concerns also feature prominently, ranging from Sam Altman’s perceived lack of principles regarding military involvement to the subjective nature of "doing the right thing" in defense tech [1][3][7]. Before deleting their accounts, users are advised to export their chat history, though some question the long-term value of keeping those logs [4][5].

6. The whole thing was a scam (garymarcus.substack.com)

703 points · 214 comments · by guilamu

Gary Marcus alleges that Sam Altman secretly negotiated a deal to take over Anthropic’s business while publicly supporting CEO Dario Amodei, suggesting the government’s punitive actions against Anthropic were influenced by OpenAI’s political donations rather than fair market competition. [src]

The discussion centers on the perceived normalization of "outright bribery" and pay-to-play politics in the US, with users arguing that the rule of law is degrading into a system where billionaires openly buy government influence [0][3][4]. Commenters highlight Sam Altman’s $25 million donation as a "speedrun" from altruism to corruption, though some argue the relatively low price tag suggests the political system is surprisingly "cheap" to influence [1][8][9]. While some claim these revelations are a shock to the community, others contend that the "corrupt US regime" and "late-stage capitalism" have long been frequent topics of cynical debate on the platform [5][7].

7. Cognitive Debt: When Velocity Exceeds Comprehension (rockoder.com)

458 points · 203 comments · by pagade

AI-assisted development creates "cognitive debt" by allowing engineers to generate code faster than they can comprehend it, leading to invisible deficits in tacit knowledge, increased long-term maintenance risks, and organizational reliance on metrics that prioritize output velocity over deep system understanding. [src]

The rise of AI coding agents is accelerating "cognitive debt," where developers struggle to maintain a mental model of codebases that grow faster than they can be comprehended [1][8]. While some argue that losing track of code details is a perennial issue predating AI [0][4], others contend that the probabilistic nature of LLMs makes this abstraction more dangerous than the shift to high-level languages [6]. To mitigate this, teams are experimenting with "agent plans" and prompt logs to capture the tacit knowledge and intent that is often lost during AI-driven development [1][3][9].

8. Croatia declared free of landmines after 31 years (glashrvatske.hrt.hr)

487 points · 121 comments · by toomuchtodo

Croatia has officially been declared free of landmines 31 years after the Homeland War, following a €1.2 billion demining effort that removed over 500,000 explosive devices. [src]

While Croatia’s declaration is a major milestone, locals remain skeptical that the country is truly 100% clear due to the difficult geography and the persistent nature of unmapped frontlines [1][9]. Commenters condemn landmines as uniquely "vile" weapons that endanger civilians decades after conflicts end, noting that demining is a high-risk profession with significant casualty rates [0][6]. The discussion also highlights the long-term global scale of the issue, with concerns raised about the decades of demining ahead for Ukraine and the ongoing presence of explosives in places like Bosnia, Vietnam, and Australia [0][5][7][9].

9. We do not think Anthropic should be designated as a supply chain risk (twitter.com)

424 points · 184 comments · by golfer

OpenAI has formally advised the Department of War that it opposes designating its competitor Anthropic as a supply chain risk. [src]

The discussion centers on the perceived disparity between Anthropic’s and OpenAI’s agreements with the Department of War, with users arguing that OpenAI’s "more stringent" safeguards are actually hollow legalisms that grant the government carte blanche [0][1][5]. Commenters suggest Anthropic was blacklisted specifically because it attempted to enforce ethical red lines through technology rather than mere contractual promises [7][9]. While some see OpenAI’s public statements as "damage control" for a tarnished brand, others argue both companies' ethical stances are flawed for focusing primarily on domestic rather than international protections [2][3][6].


Your daily Hacker News summary, brought to you by ALCAZAR. Protect what matters.