0. Microsoft and OpenAI end their exclusive and revenue-sharing deal (bloomberg.com)
827 points · 707 comments · by helsinkiandrew
Microsoft and OpenAI have ended their exclusive revenue-sharing agreement, transitioning to a non-exclusive partnership that allows both companies to collaborate with other industry players. [src]
The termination of the exclusive deal is seen as a move to prevent OpenAI from being "kneecapped" by Microsoft’s limitations, potentially allowing OpenAI to utilize Google’s superior TPU hardware [1][3]. While some argue that current AI models are merely "random token generators" lacking a true moat or thought process [2][7], others contend that the rapid progress in latent space encoding and robotics suggests we are witnessing the emergence of a new kind of intelligence [4][8][9]. Skepticism remains high regarding the industry's shifting definitions of AGI, with critics labeling the term a marketing narrative rather than a scientific reality [0][6].
1. GitHub Copilot is moving to usage-based billing (github.blog)
607 points · 447 comments · by frizlab
Starting June 1, 2026, GitHub Copilot will transition to usage-based billing, replacing premium request units with monthly allotments of GitHub AI Credits while keeping base plan prices unchanged. [src]
The shift to usage-based billing marks the end of "subsidized inference," a ZIRP-era strategy where Microsoft burned capital to gain market stickiness [0][1]. Users are particularly alarmed by massive multiplier increases, such as Claude Opus jumping from 3x to 27x, which effectively ends the ability to consume hundreds of dollars in tokens for a flat $10 monthly fee [6][8]. Many commenters now see little incentive to stay with GitHub Copilot, arguing that pay-as-you-go providers like OpenRouter or cheaper models like DeepSeek offer better value without forcing a monthly minimum spend [2][4][5][9]. Despite these price hikes, some believe costs will eventually stabilize as open-source models improve and diminishing returns on model size make "good enough" inference a commodity [7].
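The multiplier arithmetic behind the alarm can be sketched in a few lines. Only the 3x-to-27x Opus multiplier and the $10 plan price come from the discussion; the monthly credit allotment (300) and the 1-credit base cost per request are hypothetical numbers chosen for illustration.

```python
# Illustrative usage-based billing math; credit amounts are assumptions.

def requests_per_month(monthly_credits: float, base_cost: float,
                       multiplier: float) -> int:
    """Number of requests a monthly credit allotment covers at a given multiplier."""
    return int(monthly_credits // (base_cost * multiplier))

# Assume a $10 plan grants 300 credits and a request costs 1 credit at 1x.
old_allowance = requests_per_month(300, 1.0, 3)   # old Claude Opus multiplier
new_allowance = requests_per_month(300, 1.0, 27)  # new Claude Opus multiplier
print(old_allowance, new_allowance)  # 100 11 -> a 9x cut in usable requests
```

Whatever the real credit figures turn out to be, the ratio is what stings: a 9x multiplier increase shrinks the same allotment to one ninth as many requests.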
2. Men who stare at walls (alexselimov.com)
513 points · 227 comments · by aselimov3
To combat information overload and brain fog, Alex Selimov suggests a routine of staring at a wall for five to ten minutes to recover focus and reset the mind during periods of low productivity. [src]
Commenters largely agree that "staring at a wall" is a form of meditation, specifically mirroring the Soto Zen tradition of sitting for long periods to return the mind to the present [0][2][4]. While some view it as a necessary recovery of "disattention" or downtime stolen by smartphones [1], others debate whether it should be used as a productivity hack or if simply taking a walk would be more effective for burnout [7][9]. Experienced practitioners emphasize that true meditation requires intense willpower to monitor internal monologues, though even "inventing" the practice independently can provide significant benefits like increased patience and reduced fear [2][4][5].
3. Is my blue your blue? (ismy.blue)
436 points · 299 comments · by theogravity
This interactive test allows users to determine their personal threshold for categorizing shades as either blue or green to see how their color perception compares to others. [src]
Users expressed frustration with the test's binary choice, arguing that forcing a "blue" or "green" label on colors like cyan or turquoise is as nonsensical as asking if a middle-latitude city is in Canada or Mexico [0][1][9]. While some argue the forced choice is necessary to pinpoint a specific boundary on the color spectrum [4][7], others found the results illuminating, with one user discovering their personal boundary was greener than 95% of the population [3][8]. The thread also touches on the classic philosophical question of whether individuals experience the same internal qualia for colors, regardless of the labels they are taught [6].
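The argument that a forced choice is needed to pinpoint a boundary is essentially bisection: each blue-or-green answer halves the interval of candidate hues. A minimal sketch, where the hue range and the stand-in respondent are assumptions and the site's actual procedure may differ:

```python
# Bisecting hue (degrees) to locate a personal blue/green boundary.

def find_boundary(is_blue, lo: float = 120.0, hi: float = 240.0,
                  tol: float = 0.5) -> float:
    """Narrow the interval between green (~120) and blue (~240) until the
    respondent's boundary is bracketed within `tol` degrees."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_blue(mid):
            hi = mid   # boundary lies at or below mid
        else:
            lo = mid   # boundary lies above mid
    return (lo + hi) / 2

# Stand-in respondent who calls anything at or past hue 175 "blue".
boundary = find_boundary(lambda h: h >= 175.0)
print(round(boundary))  # converges near 175 within a handful of questions
```

This is also why the cyan complaint and the boundary defense talk past each other: bisection only needs an answer at each probe, not an answer the respondent finds meaningful.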
4. 4TB of voice samples just stolen from 40k AI contractors at Mercor (app.oravys.com)
494 points · 176 comments · by Oravys
The extortion group Lapsus$ reportedly stole four terabytes of data from Mercor, exposing the voice samples and government IDs of 40,000 AI contractors to potential identity theft and sophisticated voice-cloning attacks. [src]
The breach highlights the irreversible nature of biometric data theft, as victims cannot "rotate" their voices like passwords once they are leaked [2][4]. Commenters noted the irony of a security firm offering to analyze stolen samples by requesting even more voice data, while criticizing how "explicit consent" is often buried in terms of service for workers needing a paycheck [0][2][5]. The discussion emphasizes the German concept of Datensparsamkeit ("data frugality"), lamenting that the AI era has replaced data liability concerns with an insatiable drive to collect all possible information [1][3][6].

5. Pgbackrest is no longer being maintained (github.com)
408 points · 218 comments · by c0l0
The lead developer of pgBackRest has announced the project is no longer being maintained due to a lack of corporate sponsorship and the need to pursue other employment. [src]
The sudden end of pgBackRest maintenance highlights the fragility of critical open-source infrastructure that relies on corporate sponsorship, which can vanish following mergers and acquisitions [3]. While users expressed deep sadness and concern for their production databases, critics pointed out that few users contributed back or were willing to pay for the value they received [0][2][7]. The discussion reflects a broader debate on the need for sustainable funding models, such as tiered pricing based on company revenue, to prevent maintainer burnout and project abandonment [4][6][7].
6. China blocks Meta's acquisition of AI startup Manus (cnbc.com)
351 points · 244 comments · by yakkomajuri
China has blocked Meta’s attempted acquisition of the AI startup Manus, marking a significant intervention by Chinese regulators into a foreign purchase of a domestic artificial intelligence firm. [src]
The discussion centers on China's intervention in Meta's acquisition of Manus, specifically the "sinister" detention of the startup's founders to force an annulment of the deal [0][2][8]. Commenters debate whether this is a unique act of state-sponsored hostage-taking or a standard geopolitical "playbook" used by empires to prevent "Singapore-washing" and the loss of domestic talent [3][4][7]. While some argue the U.S. uses similar economic and military coercion, others contend that holding citizens without criminal charges to unwind foreign business transactions is a distinct escalation by the CCP [4][8][9].
7. “Why not just use Lean?” (lawrencecpaulson.github.io)
270 points · 188 comments · by ibobev
Computer scientist Lawrence Paulson pushes back on Lean's perceived dominance in formal mathematics, pointing to the historical successes of systems like AUTOMATH and Isabelle and arguing that Isabelle offers better automation and legibility while avoiding the complexities of dependent types. [src]
The discussion centers on Lean's pragmatic adoption of classical logic via the Mathlib library, which facilitates complex mathematical proofs by allowing the law of the excluded middle and double negation elimination [0][1]. While some users argue that constructive (intuitionistic) logic is more natural for programming because its proofs correspond directly to data structures [3][6], others contend that classical logic remains the standard for proving algorithm correctness [7][8]. Despite criticisms that Lean may be less "elegant" or powerful in specific areas compared to Agda or Coq, it is praised for its versatility and large community [2][5].
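The two classical principles at issue can be stated directly in Lean. This minimal sketch uses only the core `Classical` namespace rather than Mathlib's lemma names, which may differ:

```lean
-- Law of the excluded middle: every proposition is true or false.
example (p : Prop) : p ∨ ¬p := Classical.em p

-- Double-negation elimination, via proof by contradiction
-- (¬¬p unfolds to ¬p → False, which byContradiction accepts).
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```

Neither statement is provable in purely constructive logic, which is exactly why Mathlib's wholesale adoption of classical axioms is a pragmatic choice rather than a neutral one.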
8. To my students (ozark.hendrix.edu)
278 points · 174 comments · by marvinborner
In an open letter to students at Hendrix College, the author urges them to value elegant code and careful refactoring, to take the ethics of computing seriously, and to abstain from using LLMs. [src]
The discussion centers on a divide between academic idealism and industry pragmatism, with critics arguing that advising students to prioritize "elegant" code and slow refactoring is a path to unemployment in a market that values product delivery over code as an artifact [0][3][5]. While some praise the author's moral courage and the inclusion of ethics in CS education [2][9], others label the refusal to use LLMs as a "Luddite" stance that ignores current technological shifts [4][6][7]. There is a respectful debate regarding "generative AI vegetarianism," with some hoping for the development of models trained on ethical, out-of-copyright data to bridge the gap for those with principled objections [8].
9. Dutch central bank ditches AWS and chooses Lidl for European Cloud (techzine.eu)
319 points · 132 comments · by benterix
The Dutch central bank (DNB) is switching from American cloud providers to Schwarz Digits, the IT arm of Lidl owner Schwarz Group, to reduce geopolitical dependency and ensure data remains under European law via the sovereign Stackit platform. [src]
The Dutch central bank's move to Lidl’s cloud service (Schwarz Digits) has sparked surprise that a discount grocer can compete with American tech giants [1][3]. While some users advocate for using virtual machines and open-source tools to avoid vendor lock-in and ease the transition away from US-based infrastructure [0][6], others argue that managed services are often preferred because they reduce the need for specialized engineering staff and allow companies to focus entirely on their core products [7]. However, critics note that the lure of "cost optimization" through proprietary tools often leads back to deep dependency on a single provider [2][4], while some remain skeptical of replacing robust services like S3 with basic VM setups [5].
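The anti-lock-in advice from the thread, depending on a minimal interface rather than one provider's SDK, can be sketched as follows; the `BlobStore` interface and in-memory backend are illustrative assumptions, not any commenter's actual setup:

```python
# Sketch of coding against a storage interface so the backend (AWS S3, an
# S3-compatible European provider, or plain files on a VM) can be swapped
# without touching application code.
from typing import Protocol

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Stand-in backend; a real migration would wrap an S3-style client
    or filesystem access behind this same interface."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: BlobStore, report: bytes) -> None:
    # Application code sees only BlobStore, never a provider SDK.
    store.put("reports/latest", report)

store = InMemoryStore()
archive_report(store, b"quarterly figures")
print(store.get("reports/latest"))
```

The trade-off raised in the thread still applies: this portability costs you the managed-service conveniences (durability guarantees, lifecycle policies) that make proprietary offerings attractive in the first place.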
Brought to you by ALCAZAR. Protect what matters.