0. Claude Code's source code has been leaked via a map file in their NPM registry (twitter.com)
2086 points · 1020 comments · by treexs
The source code for Claude Code was reportedly leaked after a source map file was inadvertently included in its NPM registry package. [src]
The leak, likely caused by a Bun build bug [9], revealed a codebase that many users found surprisingly messy, highlighted by a single 3,167-line function with extreme cyclomatic complexity [5][7]. Key discoveries include a regex-based sentiment analysis tool for logging negative user prompts [0][2] and an "undercover mode" designed to mimic human behavior [1][3]. Additionally, the code contains an "anti-distillation" defense that poisons API traffic with fake tool definitions to prevent competitors from training on Claude’s outputs [4][6].
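The regex-based frustration detection described in the thread can be pictured with a minimal sketch. The patterns and function names below are invented for illustration; the leaked code's actual patterns are not reproduced here:

```typescript
// Hypothetical sketch of regex-based "frustration" detection on user prompts.
// Patterns and names are invented; they do not come from the leaked source.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(wtf|ffs)\b/i,                            // profane shorthand
  /\bthis (is|was) (stupid|useless|wrong)\b/i, // direct complaints
  /\byou (keep|still) (breaking|ignoring)\b/i, // repeated-failure complaints
];

// Returns true if any pattern matches, i.e. the prompt reads as negative
// and would be flagged for logging.
function isFrustratedPrompt(prompt: string): boolean {
  return FRUSTRATION_PATTERNS.some((re) => re.test(prompt));
}

console.log(isFrustratedPrompt("wtf, this is wrong again")); // true
console.log(isFrustratedPrompt("great, that fixed it"));     // false
```

A classifier like this is cheap to run on every prompt but brittle (no negation handling, easy to dodge with paraphrase), which may be why commenters found it a surprising thing to ship in a production codebase.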
1. Axios compromised on NPM – Malicious versions drop remote access trojan (stepsecurity.io)
1930 points · 808 comments · by mtud
A compromised maintainer account was used to publish malicious versions of the popular **axios** library (1.14.1 and 0.30.4) to npm, injecting a hidden dependency that deploys a remote access trojan (RAT) on Windows, macOS, and Linux systems. [src]
The compromise of Axios has reignited debates over the security of the JavaScript ecosystem, with users highlighting that the attack relied on a malicious `postinstall` script in a fake dependency [4]. To mitigate such risks, many recommend configuring package managers to ignore scripts and enforce a "minimum release age" for updates, though critics note this may simply delay the activation of dormant malware [0][9]. There is a strong consensus favoring "batteries included" standard libraries or single-file C libraries to reduce the massive attack surface created by transitive dependencies [1][3][8].
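The first mitigation commenters recommend amounts to a single line of package-manager configuration; as a sketch (the `ignore-scripts` flag is standard npm/pnpm configuration):

```ini
# .npmrc — disable lifecycle scripts (preinstall/postinstall etc.) for
# installed packages, closing off the exact mechanism this attack used
ignore-scripts=true
```

Note that `ignore-scripts` also blocks legitimate build steps (e.g. native addons), so teams typically pair it with an allowlist or run required builds explicitly. For the "minimum release age" idea, recent pnpm versions support a `minimumReleaseAge` setting (in minutes) that refuses to install versions published too recently; check your package manager's documentation for the exact name and availability.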
2. The Claude Code Source Leak: fake tools, frustration regexes, undercover mode (alex000kim.com)
1369 points · 572 comments · by alex000kim
The source code for Claude Code was leaked via a map file in its NPM registry, revealing internal details such as "undercover mode," regexes for handling user frustration, and placeholder tools. [src]
The leak of Claude Code's internal prompts has sparked a debate over "undercover mode," which instructs the AI to omit mentions of its identity and write commit messages "as a human developer would" [0][6]. While some users view this as a deceptive attempt to bypass anti-AI sentiment or legal concerns regarding copyright and accountability, others argue it is a practical measure to keep git histories clean of "Bill of Tools" noise [1][5][7][8]. Additionally, the leak revealed that Anthropic developers are using detailed code comments to store operational data and business context, a practice described as both a "hack" for guiding AI agents and a "YOLO" approach that inadvertently exposes trade secrets [2][3].
3. Why the US Navy won't blast the Iranians and 'open' Strait of Hormuz (responsiblestatecraft.org)
465 points · 1443 comments · by KoftaBob
The U.S. Navy is avoiding a direct confrontation to reopen the Strait of Hormuz because Iran's inexpensive anti-ship missiles and drones pose an asymmetric, high-risk threat to costly American aircraft carriers. The standoff signals a shift away from traditional Western naval dominance near well-defended shorelines. [src]
The discussion centers on whether the U.S. Navy remains capable of securing the Strait of Hormuz, with some arguing that aircraft carriers have become expensive liabilities vulnerable to low-cost drones and missiles [0][5][6]. While some commenters believe the U.S. has lost the industrial scale to compete with adversaries like China [1], others contend that carriers remain powerful assets for air superiority and that current operations demonstrate their continued relevance [5][8]. A significant portion of the debate focuses on the grim reality of a potential conflict, comparing it to the "no man's land" of trench warfare or historical mass-destruction strategies used to collapse economies [2][9].
4. Oracle slashes 30k jobs (rollingout.com)
914 points · 846 comments · by pje
Oracle has laid off an estimated 20,000 to 30,000 employees via early-morning emails, cutting costs to fund a massive, debt-financed $58 billion expansion into artificial intelligence infrastructure. [src]
While Oracle's database was once the industry leader for high availability and scalability [2], commenters now question its modern value proposition given the rise of free, competitive alternatives like Postgres [0][2]. The massive layoffs are viewed by some as a correction for aggressive pandemic-era hiring [3], while others attribute the cuts to over-investment in AI products that have yet to yield returns [1]. Beyond technical merits, the discussion highlights Oracle's deep entrenchment in government and classified sectors [4], as well as the cold, "terror-like" emotional impact of corporate layoff procedures [7][9].
5. Artemis II is not safe to fly (idlewords.com)
903 points · 637 comments · by idlewords
NASA is facing criticism for proceeding with the crewed Artemis II mission despite significant heat shield damage, including material "spalling" and melted bolts, observed during the uncrewed Artemis I flight, raising concerns that schedule and budget pressures are compromising astronaut safety. [src]
Critics argue that the Artemis II heat shield issues mirror the "broken safety culture" and "success-oriented planning" that led to the Challenger and Columbia disasters, where unexpected hardware behavior was eventually normalized as an acceptable risk [0][1][4]. However, others contend this comparison is unfair, noting that NASA has analyzed the current problem deeply rather than ignoring it, and that both engineers and astronauts currently believe the mission is safe [2]. The debate also touches on whether manned space exploration should be viewed as a high-risk endeavor akin to extreme sports, where some level of tragedy is an acceptable trade-off for progress [3][7].
6. OpenAI closes funding round at an $852B valuation (cnbc.com)
529 points · 494 comments · by surprisetalk
OpenAI has finalized a new funding round that values the artificial intelligence company at $852 billion. [src]
The reported $852B valuation and $122B funding round have drawn skepticism, with commenters noting that much of the capital is contingent on future milestones and may be a "reality-distortion field" intended to signal market dominance [0][4][6]. While OpenAI's revenue growth is significant, critics argue that focusing on revenue ignores massive projected compute costs—potentially $150 billion annually—and the lack of clear profitability [1][3]. Furthermore, there is a debate over whether AI is a truly transformative "electricity" moment or a "VR moment" where the actual utility of AI agents is being overestimated by investors who have few other attractive places to park capital [8][9].
7. Ollama is now powered by MLX on Apple Silicon in preview (ollama.com)
646 points · 355 comments · by redundantly
Ollama has integrated Apple’s MLX framework to significantly accelerate AI model performance on Apple Silicon, introducing NVFP4 quantization support and improved caching for faster, more memory-efficient coding and agentic tasks on macOS. [src]
The consensus among many users is that on-device LLMs represent the future of computing due to improved privacy, reduced latency, and the elimination of subscription costs [0][1]. However, skeptics argue that users generally prioritize convenience over privacy and that local models may never match the efficiency or "frontier" intelligence of massive cloud-based data centers [2][5][6]. Notable anecdotes include developers using local models for bash scripts [3] and experimenting with "uncensored" models that bypass the strict guardrails found in corporate or state-influenced AI [4]. There are also concerns that the current era of high-quality open-weight models is a temporary "bubble" driven by corporate competition and venture capital that may eventually shift toward paid or closed-source models [7][9].
8. GitHub backs down, kills Copilot pull-request ads after backlash (theregister.com)
609 points · 368 comments · by _____k
GitHub has disabled a feature that allowed Copilot to inject promotional "tips" into human-authored pull requests following developer backlash over the AI's unauthorized edits. GitHub executives admitted the behavior was a "wrong judgment call" and clarified that such tips will no longer appear in those contexts. [src]
The community reacted with sharp criticism toward GitHub’s attempt to rebrand advertisements as "product tips," viewing it as a waste of top-tier engineering talent and a sign of Microsoft’s "marketing-driven" influence [0][8]. Many users expressed a sense of betrayal, arguing that Microsoft is ruining GitHub's dominance by prioritizing monetization over user experience, which has prompted discussions about migrating to alternatives like GitLab [1][2][5]. While some debate whether the "best minds" are truly being wasted on ads or simply finding ways to fund free technology, there is a strong consensus that the platform's moral and product direction has declined since the acquisition [3][7][9].
9. Microsoft: Copilot is for entertainment purposes only (microsoft.com)
598 points · 208 comments · by lpcvoid
Microsoft's updated terms of use state that Copilot is for entertainment purposes only, warning users that the AI can make mistakes and should not be relied upon for important advice. [src]
Commenters express frustration with "legalese" that allows companies to disclaim liability for tools marketed as professional, with some arguing that obtuse contracts should be automatically invalid [0][4]. There is a notable focus on the absurdity of Anthropic's "Pro" plan prohibiting commercial use in Europe, a restriction verified by users through VPN testing [1][7]. While some view these disclaimers as standard software boilerplate [5][6], others warn that such clauses ensure human employees remain the sole point of accountability when AI systems fail [8].
Brought to you by ALCAZAR. Protect what matters.