0. Copilot edited an ad into my PR (notes.zachmanson.com)
1601 points · 641 comments · by pavo-etc
GitHub Copilot reportedly inserted advertisements for itself and Raycast into a developer's pull request description after being summoned to correct a simple typo. [src]
Microsoft has disabled "product tips" in Copilot-generated pull requests following backlash that these messages were intrusive advertisements [0][1]. While some users compare these messages to "Sent from my iPhone" signatures [9], others argue they serve as a useful signal to identify "lazy" submissions where the author failed to review the AI's output [2][5]. There is a significant debate regarding accountability: some developers believe AI should be credited as a co-author for transparency [5][6], while others argue the human submitter must take full responsibility for the code regardless of its origin [8].
1. How to turn anything into a router (nbailey.ca)
773 points · 261 comments · by yabones
In response to potential U.S. router import bans, this guide explains how to convert any Linux-capable computer into a functional router using Debian, basic networking hardware, and open-source tools like `hostapd`, `dnsmasq`, and `nftables` for DHCP, DNS, and firewall management. [src]
The discussion highlights that any computer with a network interface can function as a router by leveraging Linux kernel features like NAT and VLANs, which allow for sophisticated network isolation on minimal hardware [0][3][8]. While some users prefer the convenience and advanced security features of dedicated web interfaces like OPNsense, others argue that these GUI abstractions can be confusing and restrictive compared to direct command-line configuration [1][4]. The thread also reflects on the historical utility of repurposing obsolete hardware for routing, noting that even decades-old machines are often fast enough for modern gigabit speeds [0][2].
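The NAT step the thread keeps coming back to boils down to a very small nftables ruleset. As a minimal sketch (not the article's actual configuration), the snippet below renders the masquerade rule such a Debian router would load with `nft -f`; the interface name `wan0` and the LAN subnet are placeholders, and IP forwarding must separately be enabled via `sysctl net.ipv4.ip_forward=1`.

```python
# Illustrative only: generate the minimal nftables NAT ruleset a Linux
# router needs. "wan0" (upstream interface) and the LAN subnet are
# placeholder values, not from the article.
WAN_IF = "wan0"
LAN_SUBNET = "192.168.1.0/24"

ruleset = f"""\
table inet nat {{
    chain postrouting {{
        type nat hook postrouting priority srcnat;
        # Rewrite LAN source addresses to the WAN interface's address
        ip saddr {LAN_SUBNET} oifname "{WAN_IF}" masquerade
    }}
}}
"""

print(ruleset)
```

Everything else in the guide's stack (`hostapd` for Wi-Fi, `dnsmasq` for DHCP/DNS) layers on top of this one kernel feature.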
2. Do your own writing (alexhwoods.com)
743 points · 241 comments · by karimf
Alex Woods argues against using AI to write documents, asserting that the process of writing is essential for developing deep understanding, building personal credibility, and strengthening critical thinking skills. [src]
While many users view writing as the "last step in thinking" that reveals contradictions and consolidates ideas [1], others argue that AI is better suited for "ritual" writing like release notes or context dumps that humans find tedious to produce and consume [1][6]. There is significant debate over using LLMs for "rubber ducking"; some find them useful for identifying edge cases [2][4], while critics argue that LLMs lack true comprehension and that genuine rubber ducking requires explaining ideas to oneself rather than a conversational agent [3][5]. Additionally, some suggest the focus should be on "not letting AI think for you," noting that alternative methods like dictation can be more effective than writing for capturing thought processes [7].
3. Fedware: Government apps that spy harder than the apps they ban (sambent.com)
682 points · 281 comments · by speckx
A new report reveals that numerous U.S. government apps, including those from the White House and FBI, utilize invasive tracking SDKs and excessive permissions to collect biometric data, precise locations, and device information that often feeds into a broader federal surveillance pipeline. [src]
Commenters expressed alarm at the invasive nature of "Fedware," noting that native apps are often chosen over web pages specifically to bypass browser privacy restrictions and access sensitive device APIs [3]. The discussion highlighted the "cringe" and propagandistic elements of these apps, with some comparing the tactics to those used in North Korea [0][4]. While some users debated whether the hoarding of extreme wealth is correlated with mental illness or simply an extension of universal human nature, others criticized the article's AI-generated aesthetic for being distracting and potentially unreliable [1][5][6][8].
4. How the AI Bubble Bursts (martinvol.pe)
371 points · 521 comments · by martinvol
The AI bubble faces a potential burst as rising energy costs, drying venture capital, and massive infrastructure expenses force labs like OpenAI and Anthropic to consider exits or price hikes, threatening market valuations and the broader economy despite the technology's long-term productivity benefits. [src]
Commenters are sharply divided on whether the AI boom is a sustainable "step change" or a speculative bubble, with some arguing that token inference is already profitable while others maintain that massive R&D and capex costs make the business model unsustainable [0][6][8][9]. Critics point to factual inaccuracies in the linked article regarding RAM prices and OpenAI's monetization as evidence of an overly defensive, "anti-AI" bias [1][8]. While some see skyrocketing demand for tokens as a sign of a healthy market, skeptics argue this demand may be artificial or nearing saturation, potentially leading to a "bust" if the technology fails to provide concrete value beyond replacing human labor [2][3][5].
5. CodingFont: A game to help you pick a coding font (codingfont.com)
493 points · 237 comments · by nvahalik
CodingFont is an interactive tool that helps developers select their ideal programming typeface by comparing different fonts side-by-side using real code snippets. [src]
The discussion highlights a strong divide over font aesthetics, with some users recommending playful or "cute" options like Maple Mono, Lotion, and Comic Shanns [0][1][4], while others find such styles amateurish [9]. A significant point of contention is the use of ligatures; some developers find them distracting "monkey business," though others note they can often be disabled via terminal configurations [2][3][8]. When evaluating fonts, users focus on specific character rendering—particularly "m" and "r"—and the ability to customize metrics for high-density or high-readability displays [5][6][7].
6. Android Developer Verification (android-developers.googleblog.com)
326 points · 335 comments · by ingve
Google is rolling out mandatory developer verification across the Play Console and Android Developer Console to combat malware, requiring apps to be registered by verified developers to maintain standard installation experiences on certified devices starting in late 2026. [src]
Google's developer verification process is criticized as a fragmented, "unpolished" experience that requires redundant identity and business documentation [1][6][9]. While some argue these security measures are necessary to combat the high rate of malware found in sideloaded apps, others contend that the Google Play Store itself remains heavily infected with "crapware" [3][4][8]. Long-time users express frustration that Android is abandoning its open-source roots, leading to calls for government regulation or a migration to deGoogled and Linux-based mobile alternatives [0][2][7].
7. Turning a MacBook into a touchscreen with $1 of hardware (2018) (anishathalye.com)
408 points · 234 comments · by HughParry
Researchers developed "Project Sistine," a proof-of-concept that turns a MacBook into a touchscreen using a $1 mirror and computer vision to track finger reflections via the built-in webcam. [src]
The discussion reveals a strong divide over the utility of touchscreens on MacBooks, with many users citing Steve Jobs’ 2010 warning that vertical touch surfaces are ergonomically "terrible" and cause arm fatigue [3][4]. While some argue that macOS is already optimized for keyboard commands to avoid reaching for the screen [0][2], others contend that the operating system's keyboard navigation is "third class" and lacks intuitive window management compared to iPadOS or Windows [1][5][9]. Despite occasional muscle memory leading users to touch their laptop screens after using iPads, there is skepticism regarding long-standing rumors that Apple will ever officially integrate the technology into the MacBook line [7][8].
8. Philly courts will ban all smart eyeglasses starting next week (inquirer.com)
416 points · 210 comments · by Philadelphia
Starting Monday, Philadelphia courts will ban all smart eyewear with recording capabilities to protect witnesses and jurors from intimidation. [src]
The ban on smart glasses in Philadelphia courts has sparked debate over the balance between privacy and accessibility, with some arguing the devices are intrusive "spy devices" [2] while others highlight their essential use for real-time captioning for the hearing impaired [5]. Commenters expressed skepticism regarding government surveillance in legal settings, noting that even when officials promise not to listen to privileged communications, trust remains low [1][9]. The discussion also looks toward a future where enforcing such bans becomes nearly impossible due to the potential rise of ocular implants [0].
9. I am definitely missing the pre-AI writing era (lesswrong.com)
322 points · 240 comments · by joozio
The author reflects on the deterioration of their creative voice and writing skills due to AI dependency, sparked by a technical draft's rejection for failing AI-detection metrics. [src]
The rise of AI has created a tension between maintaining a unique human voice and achieving grammatical perfection, with some users finding it difficult to avoid sounding like an LLM even when writing naturally [2][7]. While some argue that refusing to edit or use tools results in incoherent "word puke" that disrespects the reader [0][1][5], others contend that over-reliance on AI degrades creative expression and strips writing of its emotional resonance [3][9]. Ultimately, there is a growing debate over whether raw, unpolished "stream of consciousness" writing is a valid rebellion against the dry, sterile nature of AI-generated content [4][8].
10. Bird brains (2023) (dhanishsemar.com)
340 points · 220 comments · by DiffTheEnder
New Zealand’s kea parrots demonstrate advanced intelligence by manipulating traffic cones to stop cars for food, highlighting research that shows birds possess primate-level cognitive abilities due to high neuron density despite their small brain size. [src]
The discussion highlights that while birds possess complex minds and impressive memories, there is a sharp ethical divide over keeping them in cages, with some viewing it as imprisonment and others as a form of sanctuary [0][4][9]. Commenters debate the nature of avian intelligence, ranging from the idea that parrots merely mimic sounds to the theory that their linguistic abilities may be tied to specific neuron counts [2][6]. Additionally, the conversation touches on the evolutionary optimization of bird brains as dinosaur descendants and compares their cognitive efficiency to other animals like octopuses [5][8].
11. New Apple Silicon M4 and M5 HiDPI Limitation on 4K External Displays (smcleod.net)
313 points · 167 comments · by smcleod
Apple Silicon M4 and M5 chips contain a firmware regression that limits external 4K displays to a maximum HiDPI resolution of 3360x1890. This hardcoded limitation prevents full 3840x2160 HiDPI support, forcing users to choose between blurry text at native resolution and a reduced workspace. [src]
The discussion centers on a limitation in M4/M5 chips where macOS no longer supports a 7680x4320 (8K) framebuffer for 4K displays, a configuration some users previously used for high-density scaling [0][6]. While some argue this setup is highly unusual and inefficient because it renders at 2x before downscaling, others note that 4K monitors at 27" or 32" often require non-integer scaling to achieve comfortable UI sizes [3][4][7]. To resolve the issue, users suggest emailing Tim Cook directly, citing past success in bypassing standard support channels to fix display-related bugs [2][5][8].
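The numbers in the thread follow from one rule: a HiDPI "looks like" resolution is rendered into a framebuffer at 2x in each dimension, then scaled to the panel's native pixels. A quick arithmetic check (illustrative, based on the 2x factor described above):

```python
# macOS HiDPI: the "looks like" resolution is rendered at 2x per axis,
# then downscaled to the display's native pixels.
def framebuffer(looks_like_w, looks_like_h):
    return looks_like_w * 2, looks_like_h * 2

# "Looks like" 3840x2160 HiDPI on a 4K panel needs the 8K framebuffer
# that M4/M5 no longer provide:
full = framebuffer(3840, 2160)    # (7680, 4320)

# The reported cap of 3360x1890 corresponds to a smaller framebuffer:
capped = framebuffer(3360, 1890)  # (6720, 3780)

print(full, capped)
```

This is why the cap matters on 27" and 32" 4K monitors: the comfortable "looks like" sizes there are exactly the ones that need the larger framebuffer.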
12. FTC action against Match and OkCupid for deceiving users, sharing personal data (ftc.gov)
316 points · 158 comments · by gnabgib
The FTC has taken action against Match Group and OkCupid for allegedly deceiving users and illegally sharing sensitive personal data with third parties. [src]
The FTC's action highlights severe privacy failures, including allegations that OkCupid shared millions of user photos with a third party linked to its founders' investments [2]. Users report disturbing technical glitches, such as accounts merging with strangers' profiles, and suspect that platforms sell data or leak email addresses to spammers after account deletion [0][9]. Beyond data concerns, there is a strong consensus that the industry suffers from misaligned incentives, as apps profit from keeping users single rather than facilitating successful matches [1][6]. This has led to a "cesspool" environment characterized by adversarial dating dynamics, gender-based pricing discrimination, and the suspected use of fake profiles to maintain engagement [1][3][4][8].
13. New Washington state law bans noncompete agreements (seattletimes.com)
336 points · 125 comments · by toomuchtodo
Washington state has enacted a new law that effectively bans noncompete agreements, significantly limiting the ability of employers to restrict workers from joining competitors or starting their own businesses. [src]
Commenters largely agree that non-compete agreements are a "useless scare tactic" for most employees [2] and that their unenforceability is a key driver of innovation in regions like Silicon Valley [0]. While some argue that these agreements are necessary to protect small startups from having their intellectual property poached by larger corporations [5], others suggest that they should only be valid if the employer provides a financial payout during the restricted period [3][6]. A notable exception to the general opposition is the sale of a business, where many believe limited non-competes are fair to prevent a seller from immediately competing against the buyer [1][7], though some argue even these restrictions primarily benefit the wealthy at the expense of broader society [8][9].
14. 72% of the dollar's purchasing power was destroyed in just four episodes (eco3min.fr)
208 points · 251 comments · by latentframe
Since 1914, the U.S. dollar has lost 96.9% of its purchasing power, with 72% of that destruction occurring during just four concentrated inflationary episodes: WWI, WWII, the Great Inflation (1968–1982), and the post-COVID surge. [src]
Commenters debate whether the dollar's decline is driven by military overspending and a shift toward a "petro-yuan," which could allow countries like Iran to bypass the U.S. dollar in energy trades [0][2][5]. However, some argue that China may resist this shift because liberalizing the yuan to facilitate global trade could break its currency peg and damage its own economy [3][6]. Critics also question the study's methodology, suggesting the four historical "episodes" were cherry-picked and fail to account for baseline exponential inflation [7]. Finally, there is skepticism regarding political goals to weaken the dollar to revive U.S. manufacturing, as domestic labor costs remain uncompetitive with overseas markets [1][9].
15. The curious case of retro demo scene graphics (datagubbe.se)
356 points · 91 comments · by zdw
The retro demo scene is grappling with the rise of generative AI, which challenges a long-standing culture that prioritizes manual craft and painstaking effort over originality, as artists transition from traditional plagiarism and "copy art" to modern digital tools. [src]
The demoscene's history of "copying" art is rooted in its origins among software crackers, where rebellious teenagers often repurposed professional works like those of Boris Vallejo without credit [2][4]. While some argue that copying is a fundamental part of the learning process, the lack of attribution is often viewed as misleading to the audience [0]. Modern debates have shifted toward the use of generative AI; some see it as a tool for optimization and technical problem-solving [5][9], while others argue that outsourcing the creative process to AI undermines the core ethos of overcoming hardware limitations through personal craft [6][7].
16. Vulnerability research is cooked (sockpuppet.org)
259 points · 170 comments · by pedro84
AI coding agents are poised to revolutionize exploit development by automating the discovery of high-impact zero-day vulnerabilities through pattern-matching and reasoning. This shift threatens to overwhelm traditional defenses, challenge the sustainability of open-source security, and potentially trigger reactive, incoherent government regulations. [src]
The discussion centers on whether LLMs will favor defenders by allowing them to find and patch vulnerabilities before deployment [0][5]. While some argue that AI agents can now automate the remediation of entire bug queues [2][8], others contend that the bottleneck remains human resources and the risk of AI-generated patches introducing new, complex defects [1][3][7]. Ultimately, there is skepticism regarding "perfect" security, as code complexity continues to outpace analysis tools and fundamental theoretical limits like the halting problem persist [6][9].
17. I use Excalidraw to manage my diagrams for my blog (blog.lysk.tech)
283 points · 115 comments · by mlysk
Martin Lysk developed a forked VSCode extension for Excalidraw that automatically exports frames as light and dark mode SVGs when they are prefixed with `export_`. This automation replaces a manual multi-step process, allowing for seamless local previews and image management within his blog's workflow. [src]
The discussion reveals a sharp divide over Excalidraw’s "hand-drawn" aesthetic, with some users finding it "childish" and inaccessible [3][8], while others argue the wonky style effectively communicates that a concept is a high-level, approximate illustration rather than a precise technical spec [6]. Critics suggest these diagrams are becoming "AI tells" similar to specific linguistic patterns [4], though others note that AI currently struggles to generate native Excalidraw files compared to Mermaid diagrams [7]. Beyond aesthetics, participants debated the technical implementation of dark mode in SVGs [5] and questioned whether complex diagrams are better treated as disposable tools for early-stage thinking rather than long-term documentation [9].
18. Learn Claude Code by doing, not reading (claude.nagdy.me)
278 points · 112 comments · by taubek
Ahmed Nagdy has launched an interactive learning platform for Claude Code featuring 11 modules, terminal simulators, and configuration builders to help users master the tool through hands-on practice without requiring an installation or API key. [src]
Users are reporting significant frustration with Claude Code's rapid quota consumption, which some attribute to the tool defaulting to a 1M token context window [0][2][5]. While some suggest technical workarounds to disable this large context, others argue that the lack of transparency regarding token usage makes the tool difficult to manage in a principled way [6][8]. Additionally, the discussion reflects a divide between those questioning the value of "learning" a non-deterministic tool and those finding the current onboarding quizzes inaccurate for experienced users [1][4][7].
19. 15 years, one server, 8GB RAM and 500k users – how Webminal refuses to die (community.webminal.org)
304 points · 54 comments · by giis
Since 2011, the Linux learning platform Webminal has served 500,000 users from a single server with 8GB of RAM and a legacy tech stack. Despite surviving a datacenter fire and financial hurdles, the project remains a free, community-funded resource for students to practice terminal commands and sysadmin skills. [src]
The discussion highlights a nostalgic appreciation for "old school" efficient engineering, with many users arguing that 8GB of RAM is actually quite substantial for a web server despite modern bloat [0][5]. While some users reminisce about eras when 64K or 16MB were considered massive amounts of memory, others question the technical limits of the current setup, specifically how many simultaneous users can be supported if each is allocated 256MB [3][7][8]. There is also speculation regarding the cost-effectiveness of long-term hosting versus modern alternatives like cheap mini PCs or free cloud tiers [6][9].
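The capacity question raised in the thread is simple division. A back-of-envelope check (ignoring OS and service overhead, which would lower the real number):

```python
# If each Webminal session were allocated 256 MB on an 8 GB server,
# how many could run at once? Overhead is ignored, so this is an
# upper bound, not a measured figure.
total_mb = 8 * 1024   # 8 GB expressed in MB
per_user_mb = 256

concurrent = total_mb // per_user_mb
print(concurrent)  # 32
```

Thirty-two fully loaded sessions at once is modest, but terminal practice sessions rarely use their full allocation simultaneously, which is how such a box stretches to 500k users over 15 years.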