0. Microsoft and OpenAI end their exclusive and revenue-sharing deal (bloomberg.com)
827 points · 707 comments · by helsinkiandrew
Microsoft and OpenAI have ended their exclusive revenue-sharing agreement, transitioning to a non-exclusive partnership that allows both companies to collaborate with other industry players. [src]
The termination of the exclusive deal is seen as a move to prevent OpenAI from being "kneecapped" by Microsoft’s limitations, potentially allowing OpenAI to utilize Google’s superior TPU hardware [1][3]. While some argue that current AI models are merely "random token generators" lacking a true moat or thought process [2][7], others contend that the rapid progress in latent space encoding and robotics suggests we are witnessing the emergence of a new kind of intelligence [4][8][9]. Skepticism remains high regarding the industry's shifting definitions of AGI, with critics labeling the term a marketing narrative rather than a scientific reality [0][6].
1. GitHub Copilot is moving to usage-based billing (github.blog)
607 points · 447 comments · by frizlab
Starting June 1, 2026, GitHub Copilot will transition to usage-based billing, replacing premium request units with monthly allotments of GitHub AI Credits while keeping base plan prices unchanged. [src]
The shift to usage-based billing marks the end of "subsidized inference," a ZIRP-era strategy where Microsoft burned capital to gain market stickiness [0][1]. Users are particularly alarmed by massive multiplier increases, such as Claude Opus jumping from 3x to 27x, which effectively ends the ability to consume hundreds of dollars in tokens for a flat $10 monthly fee [6][8]. Many commenters now see little incentive to stay with GitHub Copilot, arguing that pay-as-you-go providers like OpenRouter or cheaper models like DeepSeek offer better value without forcing a monthly minimum spend [2][4][5][9]. Despite these price hikes, some believe costs will eventually stabilize as open-source models improve and diminishing returns on model size make "good enough" inference a commodity [7].
2. Men who stare at walls (alexselimov.com)
513 points · 227 comments · by aselimov3
To combat information overload and brain fog, Alex Selimov suggests a routine of staring at a wall for five to ten minutes to recover focus and reset the mind during periods of low productivity. [src]
Commenters largely agree that "staring at a wall" is a form of meditation, specifically mirroring the Soto Zen tradition of sitting for long periods to return the mind to the present [0][2][4]. While some view it as a necessary recovery of "disattention" or downtime stolen by smartphones [1], others debate whether it should be used as a productivity hack or if simply taking a walk would be more effective for burnout [7][9]. Experienced practitioners emphasize that true meditation requires intense willpower to monitor internal monologues, though even "inventing" the practice independently can provide significant benefits like increased patience and reduced fear [2][4][5].
3. Is my blue your blue? (ismy.blue)
436 points · 299 comments · by theogravity
This interactive test allows users to determine their personal threshold for categorizing shades as either blue or green to see how their color perception compares to others. [src]
Users expressed frustration with the test's binary choice, arguing that forcing a "blue" or "green" label on colors like cyan or turquoise is as nonsensical as asking if a middle-latitude city is in Canada or Mexico [0][1][9]. While some argue the forced choice is necessary to pinpoint a specific boundary on the color spectrum [4][7], others found the results illuminating, with one user discovering their personal boundary was greener than 95% of the population [3][8]. The thread also touches on the classic philosophical question of whether individuals experience the same internal qualia for colors, regardless of the labels they are taught [6].
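The "forced choice" its defenders describe [4][7] amounts to a binary search over hue: each blue-or-green answer halves the candidate interval until only a narrow personal boundary remains. A minimal sketch in Python (the 120-240 degree hue endpoints and the simulated respondent's 195-degree threshold are illustrative assumptions, not the site's actual parameters):

```python
# Binary search for a personal blue/green hue boundary.
# Hue in degrees: ~120 is a canonical green, ~240 a canonical blue.
def find_boundary(answers_blue, lo=120.0, hi=240.0, steps=12):
    """answers_blue(hue) -> True if the viewer calls this hue 'blue'."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if answers_blue(mid):
            hi = mid  # boundary lies at or below mid
        else:
            lo = mid  # boundary lies above mid
    return (lo + hi) / 2

# Simulated respondent whose true threshold sits at 195 degrees.
boundary = find_boundary(lambda h: h >= 195.0)
```

Twelve questions narrow a 120-degree interval to under 0.03 degrees, which is why a handful of forced choices suffice to place a respondent precisely on the spectrum.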
4. 4TB of voice samples just stolen from 40k AI contractors at Mercor (app.oravys.com)
494 points · 176 comments · by Oravys
The extortion group Lapsus$ reportedly stole four terabytes of data from Mercor, exposing the voice samples and government IDs of 40,000 AI contractors to potential identity theft and sophisticated voice-cloning attacks. [src]
The breach highlights the irreversible nature of biometric data theft, as victims cannot "rotate" their voices like passwords once they are leaked [2][4]. Commenters noted the irony of a security firm offering to analyze stolen samples by requesting even more voice data, while criticizing how "explicit consent" is often buried in terms of service for workers needing a paycheck [0][2][5]. The discussion emphasizes the German concept of *Datensparsamkeit* (data frugality), lamenting that the AI era has replaced data liability concerns with an insatiable drive to collect all possible information [1][3][6].
5. Pgbackrest is no longer being maintained (github.com)
408 points · 218 comments · by c0l0
The lead developer of pgBackRest has announced the project is no longer being maintained due to a lack of corporate sponsorship and the need to pursue other employment. [src]
The sudden end of pgBackRest maintenance highlights the fragility of critical open-source infrastructure that relies on corporate sponsorship, which can vanish following mergers and acquisitions [3]. While users expressed deep sadness and concern for their production databases, critics pointed out that few users contributed back or were willing to pay for the value they received [0][2][7]. The discussion reflects a broader debate on the need for sustainable funding models, such as tiered pricing based on company revenue, to prevent maintainer burnout and project abandonment [4][6][7].
6. China blocks Meta's acquisition of AI startup Manus (cnbc.com)
351 points · 244 comments · by yakkomajuri
China has blocked Meta’s attempted acquisition of the AI startup Manus, marking a significant intervention by Chinese regulators into a foreign purchase of a domestic artificial intelligence firm. [src]
The discussion centers on China's intervention in Meta's acquisition of Manus, specifically the "sinister" detention of the startup's founders to force an annulment of the deal [0][2][8]. Commenters debate whether this is a unique act of state-sponsored hostage-taking or a standard geopolitical "playbook" used by empires to prevent "Singapore-washing" and the loss of domestic talent [3][4][7]. While some argue the U.S. uses similar economic and military coercion, others contend that holding citizens without criminal charges to unwind foreign business transactions is a distinct escalation by the CCP [4][8][9].
7. “Why not just use Lean?” (lawrencecpaulson.github.io)
270 points · 188 comments · by ibobev
Computer scientist Lawrence Paulson argues against the perceived dominance of Lean in formal mathematics, highlighting the historical successes of systems like AUTOMATH and Isabelle while advocating for Isabelle’s superior automation, legibility, and avoidance of the complexities associated with dependent types. [src]
The discussion centers on Lean's pragmatic adoption of classical logic via the Mathlib library, which facilitates complex mathematical proofs by allowing the law of the excluded middle and double negation elimination [0][1]. While some users argue that constructive (intuitionistic) logic is more natural for programming because its proofs correspond directly to data structures [3][6], others contend that classical logic remains the standard for proving algorithm correctness [7][8]. Despite criticisms that Lean may be less "elegant" or powerful in specific areas compared to Agda or Coq, it is praised for its versatility and large community [2][5].
8. To my students (ozark.hendrix.edu)
278 points · 174 comments · by marvinborner
In an open letter to his computer science students, the author urges them to value elegant, carefully refactored code, to take ethics seriously, and to abstain from generative AI tools on principle. [src]
The discussion centers on a divide between academic idealism and industry pragmatism, with critics arguing that advising students to prioritize "elegant" code and slow refactoring is a path to unemployment in a market that values product delivery over code as an artifact [0][3][5]. While some praise the author's moral courage and the inclusion of ethics in CS education [2][9], others label the refusal to use LLMs as a "Luddite" stance that ignores current technological shifts [4][6][7]. There is a respectful debate regarding "generative AI vegetarianism," with some hoping for the development of models trained on ethical, out-of-copyright data to bridge the gap for those with principled objections [8].
9. Dutch central bank ditches AWS and chooses Lidl for European Cloud (techzine.eu)
319 points · 132 comments · by benterix
The Dutch central bank (DNB) is switching from American cloud providers to Schwarz Digits, the IT arm of Lidl owner Schwarz Group, to reduce geopolitical dependency and ensure data remains under European law via the sovereign Stackit platform. [src]
The Dutch central bank's move to Lidl’s cloud service (Schwarz Digits) has sparked surprise that a discount grocer can compete with American tech giants [1][3]. While some users advocate for using virtual machines and open-source tools to avoid vendor lock-in and ease the transition away from US-based infrastructure [0][6], others argue that managed services are often preferred because they reduce the need for specialized engineering staff and allow companies to focus entirely on their core products [7]. However, critics note that the lure of "cost optimization" through proprietary tools often leads back to deep dependency on a single provider [2][4], while some remain skeptical of replacing robust services like S3 with basic VM setups [5].
10. Show HN: OSS Agent I built topped the TerminalBench on Gemini-3-flash-preview (github.com)
325 points · 120 comments · by GodelNumbering
The open-source agent Dirac has surpassed both Google and top closed-source models to lead the TerminalBench 2.0 leaderboard with a 65.2% score, achieved without cheating or modifications to the evaluation harness. [src]
The discussion highlights how "harnesses"—the tools and context management surrounding a model—often impact performance more significantly than the underlying model itself [2][4]. Dirac achieves its high benchmark scores through specialized techniques like AST-based context fetching, batching operations, and "Hash-Anchored" edits to minimize token usage [0][8]. While some users question if these efficiency gains are primarily due to file skeletonization rather than the anchors themselves [3], others note that static analysis tools can be difficult for models to use effectively without aggressive steering [8][9].
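The thread does not spell out the mechanics of "Hash-Anchored" edits, but a plausible reading is that each edit carries a short hash of the span it targets, letting the agent verify its context is still fresh without re-sending the file. A speculative sketch assuming that interpretation (the function names and the 8-character anchor length are invented for illustration):

```python
import hashlib

def anchor(lines):
    """Short content hash identifying a span of lines."""
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()[:8]

def make_edit(src_lines, start, end, replacement):
    """Describe a replacement of src_lines[start:end], anchored to its hash."""
    return {"start": start, "end": end,
            "anchor": anchor(src_lines[start:end]),
            "replacement": replacement}

def apply_edit(src_lines, edit):
    """Apply only if the anchored span still matches; else signal staleness."""
    span = src_lines[edit["start"]:edit["end"]]
    if anchor(span) != edit["anchor"]:
        raise ValueError("stale anchor: file changed since the edit was made")
    return (src_lines[:edit["start"]]
            + edit["replacement"]
            + src_lines[edit["end"]:])

src = ["def add(a, b):", "    return a - b"]        # buggy second line
edit = make_edit(src, 1, 2, ["    return a + b"])
fixed = apply_edit(src, edit)                        # anchor matches, edit lands
```

An anchor mismatch means the file changed after the edit was proposed, so the agent can re-fetch context instead of silently corrupting the file, which fits the thread's theme that the harness around the model does much of the work.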
11. United Wizards of the Coast (unitedwizardsofthecoast.com)
222 points · 207 comments · by d4mi3n
Workers on the *Magic: The Gathering Arena* team have formed United Wizards of the Coast-CWA, calling for voluntary recognition from leadership after a supermajority of eligible employees signed union cards to seek better working conditions and collective bargaining rights. [src]
The unionization of Wizards of the Coast employees has sparked debate over the necessity of collective bargaining in the tech and gaming sectors, with some users arguing that unions provide essential leverage against corporate power [3][4] while others question their value for high-wage software roles [1][9]. A primary driver for this movement is a controversial "intellectual property" clause where the company claims ownership of creative work produced by employees in their free time [0]. While some commenters express concern that unionization could threaten the long-term viability of products like MTG Arena [2], others suggest the industry's poor labor standards make organizing necessary, despite structural challenges like the threat of offshoring [7][9].
12. GitHub is having issues now (githubstatus.com)
310 points · 108 comments · by SenHeng
At the time this summary was generated, GitHub's status page reported all systems operational, with normal performance across services including Git operations, API requests, Actions, and Copilot. [src]
Users are expressing growing frustration with GitHub’s reliability, noting that recent outages are increasingly impacting business operations and failing silently by displaying misleading information like empty pull request queues [1][2]. While some attribute the decline in uptime to the Microsoft acquisition, others argue that historical data may simply reflect improved outage tracking or recent "AI-induced scale increases" [0][7]. Consequently, there is a strong push toward alternatives, with users recommending self-hosted solutions like Gitea and Forgejo or platforms like Sourcehut to avoid data corruption risks and frequent downtime [4][5][9].
13. The Prompt API (developer.chrome.com)
260 points · 133 comments · by gslin
Chrome's new Prompt API allows developers to send natural language requests to the built-in Gemini Nano model for on-device AI tasks like summarization and content filtering. Currently in origin trials, the API supports multimodal inputs including text, image, and audio while maintaining user privacy by processing data locally. [src]
The Prompt API is viewed as a powerful tool for "de-snarkifying" social media by stripping aggression from comments while preserving factual content [0][7]. However, critics argue this could turn the internet into "average slop" by removing the unique flavor of human communication [3][9]. While the API offers a free, privacy-preserving alternative to paid subscriptions, significant concerns remain regarding the massive initial model download size and the potential for rogue scripts to hijack visitor hardware for decentralized compute tasks [1][2][5][8].
14. Quarkdown – Markdown with Superpowers (quarkdown.com)
290 points · 101 comments · by amai
Quarkdown is a free, open-source typesetting system that combines Markdown's simplicity with LaTeX's power to create professional papers, books, presentations, and websites using a Turing-complete scripting engine and live preview. [src]
The discussion centers on whether "superpowered" Markdown variants like Quarkdown enhance productivity or undermine the format's core appeal of simplicity and readability [1][3]. While some users appreciate the comparison to existing tools like Quarto and Typst for academic and programmable use cases [0][5][6], others argue that adding complex commands eventually necessitates a WYSIWYG editor, effectively reinventing Microsoft Word [1][3]. Despite these concerns, developers note that "natural selection" in the ecosystem favors feature-rich tools over standard renderers, as plain Markdown is already a saturated standard [9].
15. US Supreme Court reviews police use of cell location data (nytimes.com)
237 points · 142 comments · by unethical_ban
The U.S. Supreme Court is reviewing the constitutionality of "geofence warrants," which allow police to obtain location data from tech companies to identify all mobile devices present near a crime scene. [src]
The Supreme Court's review of geofence warrants has sparked debate over whether digital location data should be treated as personal "papers" or as third-party business records similar to bank security footage [1][3][8]. While some users praise Google for moving location data to local device storage to thwart these warrants, others criticize the change for degrading user experience and argue that most people do not fear court subpoenas [0][4][7][9]. Legal arguments center on whether the Fourth Amendment's protections are tied strictly to property ownership or if the massive "scope" of geofencing makes such searches inherently unreasonable [2][3][8].
16. Super ZSNES – GPU Powered SNES Emulator (zsnes.com)
271 points · 77 comments · by haunter
The original ZSNES developers have released Super ZSNES, a completely rewritten, GPU-powered emulator featuring a "Super Enhancement Engine" for high-resolution graphics, 3D Mode 7 effects, and uncompressed audio. [src]
The project evokes strong nostalgia for the original ZSNES, though some users find the transition to the new Unity-based interface "jarring" compared to the classic 90s aesthetic [0][6][7]. A major point of interest is the support for uncompressed audio replacements, which allows for high-fidelity soundtracks using original hardware samples found by community archivists [3][5]. Commenters also shared anecdotes about the technical struggles of early emulation, such as manually toggling layers to bypass transparency issues on underpowered hardware [1][9].
17. TurboQuant: A first-principles walkthrough (arkaung.github.io)
285 points · 58 comments · by kweezar
TurboQuant is a data-oblivious quantization method that compresses AI vectors to 2–4 bits by using random rotations to flatten outliers, achieving near-optimal accuracy and significant speedups in KV-cache compression and vector search without the memory overhead of per-block scale factors. [src]
The discussion centers on significant concerns regarding the novelty and academic attribution of TurboQuant, with experts arguing it is a restricted, sub-optimal version of the earlier EDEN quantization (2021) [0][4]. Critics point out that the technique—combining rotations with optimized grids—was previously established in several papers, including HIGGS and DRIVE, which TurboQuant failed to cite [1][7]. While some users are excited by the potential for these optimizations to enable running powerful models on local hardware [2], others clarify that KV cache compression primarily benefits inference concurrency rather than reducing model storage requirements [3]. In response to the criticism, the author of the explainer has committed to updating the post with proper citations of the prior literature [8].
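The flattening trick described in the summary, applying a random rotation so that a coarse uniform grid covers the vector well, can be illustrated with a randomized Hadamard-style transform: random sign flips followed by a fast Walsh-Hadamard transform. This is a generic sketch of rotation-based quantization under those assumptions, not TurboQuant's actual algorithm:

```python
import math
import random

def fwht(v):
    """In-place fast Walsh-Hadamard transform; len(v) must be a power of two."""
    h, n = 1, len(v)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2

def rotate(vec, signs):
    """Randomized orthonormal rotation: sign flips, Hadamard, 1/sqrt(n) scale."""
    v = [x * s for x, s in zip(vec, signs)]
    fwht(v)
    scale = 1 / math.sqrt(len(v))
    return [x * scale for x in v]

def quantize(vec, bits=2):
    """Uniform scalar quantization to 2**bits levels over the vector's range."""
    levels = 2 ** bits - 1
    lo, hi = min(vec), max(vec)
    step = (hi - lo) / levels or 1.0
    codes = [round((x - lo) / step) for x in vec]
    deq = [lo + c * step for c in codes]
    return codes, deq

random.seed(0)
vec = [10.0, 0.1, -0.2, 0.05, 0.0, -0.1, 0.15, 0.02]  # one large outlier
signs = [random.choice((-1, 1)) for _ in range(len(vec))]
rotated = rotate(vec, signs)        # outlier energy spread across coordinates
codes, deq = quantize(rotated, bits=2)
```

Because the rotation is orthonormal, quantization error measured in the rotated basis equals the error in the original basis after inverting the rotation, so nothing is lost by quantizing post-rotation; what is gained is a much smaller dynamic range for the uniform grid to cover.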
18. Talkie: a 13B vintage language model from 1930 (talkie-lm.com)
257 points · 73 comments · by jekude
Researchers have introduced "talkie," a 13B parameter language model trained exclusively on pre-1931 historical texts to simulate a vintage persona. The project aims to advance AI research by studying model generalization, future-prediction capabilities, and the impact of training on data entirely free from modern web contamination. [src]
Talkie-1930, a model trained on vintage data, offers a window into early 20th-century perspectives, predicting a 2025 defined by universal peace, solar energy, and the eradication of disease [0]. Users noted that while the model captures the era's colonialist worldview and accurately forecasts Indian independence, it suffers from "temporal leakage" and historical inaccuracies, such as referring to a Queen instead of a King or using the name Constantinople [2][4][7]. The discussion also touches on the difficulty of predicting the future, comparing the model's optimism to post-WWII Bayesian predictions regarding nuclear warfare [3][5][9], and debates whether LLMs can truly fulfill Steve Jobs' vision of recreating historical figures like Aristotle given the loss of original training data [1][6].
19. FDA approves first gene therapy for treatment of genetic hearing loss (fda.gov)
230 points · 86 comments · by JeanKage
The FDA has approved Otarmeni, the first-ever gene therapy for pediatric and adult patients with severe-to-profound hearing loss caused by mutations in the *OTOF* gene. [src]
The FDA's approval of a gene therapy for OTOF mutations is celebrated as a shift from symptom management to curative medicine, with some users sharing personal success using genetic testing to prevent hereditary deafness [2][7]. However, the treatment sparks a philosophical debate between those who view deafness as a physical deficiency to be corrected and those who see it as a distinct linguistic culture and identity [0][3][9]. While some argue that rejecting treatment is a defensive mechanism that hinders a child's potential, others emphasize that the deaf community seeks to be viewed as competent individuals rather than "problems" in need of fixing [1][5]. Technical concerns also remain regarding the long-term safety of the viral delivery method and the therapy's systemic effects on other organs [6].
Brought to you by ALCAZAR. Protect what matters.