In honor of Tim Cook’s pending retirement, Federico, Myke, and Stephen rank some of his quotes from the last 15 years.
This was quite the trip down memory lane.
Hank Green has made something really cool. Called the Artemis II Photo Timeline, it’s an interactive way to scroll through photos from NASA’s recent crewed mission to cislunar space — but pinned to NASA’s official schedule of the mission.
It is also a tribute to publicly available data. Though the timeline includes some videos published to Instagram and YouTube, the vast majority are images from Flickr. NASA usually uploads them with EXIF data intact, and Flickr preserves it. NASA also provided the mission schedule and, even better, has a public API for the position of the Orion spacecraft at any given time. Which means Green was also able to correlate the photos with where they were taken along the craft’s trajectory.
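The correlation step is conceptually simple: parse each photo's EXIF `DateTimeOriginal` stamp, then look up the nearest time-stamped position sample for the spacecraft. Here's a minimal sketch of that idea; the function names and the position data are illustrative assumptions, not the actual site's code or real Orion telemetry.

```python
from bisect import bisect_left
from datetime import datetime, timezone

def parse_exif_timestamp(value: str) -> datetime:
    """EXIF DateTimeOriginal uses the 'YYYY:MM:DD HH:MM:SS' format."""
    return datetime.strptime(value, "%Y:%m:%d %H:%M:%S").replace(tzinfo=timezone.utc)

def nearest_position(samples, when):
    """Return the (time, position) sample closest to `when`.

    `samples` must be sorted by time, as a position API's
    time-series output would be.
    """
    times = [t for t, _ in samples]
    i = bisect_left(times, when)
    # The nearest sample is either just before or just after the
    # insertion point; compare the two and keep the closer one.
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - when))

# Illustrative data only -- not real telemetry.
samples = [
    (datetime(2026, 2, 10, 12, 0, tzinfo=timezone.utc), (380_000.0, 0.0, 0.0)),
    (datetime(2026, 2, 10, 13, 0, tzinfo=timezone.utc), (381_200.0, 150.0, -40.0)),
]
shot_at = parse_exif_timestamp("2026:02:10 12:50:11")
t, pos = nearest_position(samples, shot_at)
```

In practice you'd feed this from Flickr's EXIF metadata on one side and the position API's samples on the other, but the matching logic stays this small.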
But why are these images on Flickr? Anil Dash explains:
Here’s the TL;DR:
- Flickr comes from (and helped start!) the Web 2.0 era, which was based on users having control over their data
- Tools at that time began giving creators the power to decide what license they wanted to release their content under, including permissions about how it could be shared, used, or remixed
- Because the people who made platforms back then were users and creators themselves, they thought about the long term and wanted to be able to preserve people’s work
- After lots of corporate shuffling, Flickr ended up in the hands of a family-owned company, SmugMug, and they made the Flickr Foundation to preserve public photos for the next 100 years
- NASA’s images should only be on a service where they can be stored in full resolution, for the long term, dedicated to the public domain — which the other social media apps of today can’t do
Did you know that which astronaut took which photo is not public? Hank Green explains:
A previous version of this site showed some data on which astronaut took which photo, but it was brought to my attention that the four astronauts together agreed that they did not want credit for any photos taken on the mission. I’m somewhat conflicted about this because this project is about giving as much context as possible, but of course there is also something very beautiful about not wanting to take individual credit for something that was the result of so much collaboration.
An NBER study of nearly 6,000 CEOs and CFOs across the US, UK, Germany, and Australia found that roughly 90% of firms reported zero measurable impact on productivity or employment from AI over the past three years.
The average employee AI usage was 1.5 hours per week.
The average CEO AI usage was less than one hour per week.
Meanwhile, their companies are pouring money into the $690 billion AI infrastructure buildout that, according to Sequoia, needs $600 billion in annual revenue to justify itself (but currently generates maybe $50-100 billion).
Only one in five AI investments delivers any measurable ROI. Only one in 50 delivers transformational value. And 95% of enterprise AI pilots fail to escape the lab.
OpenAI, in a nameless blog post:
Mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world. These incidents are a reminder of how real the threat of violence is—and how quickly violent intent can move from words to action.
People may also bring these moments and feelings into ChatGPT. They may ask questions about the news, try to understand what happened, express fear or anger, or talk about violence in ways that are fictional, historical, political, personal, or potentially dangerous. We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.
We’re sharing what we do to minimize uses of our services in furtherance of violence or other harm: how our models are trained to respond safely, how our systems detect potential risk of harm, and what actions we take when someone violates our policies. We are constantly improving the steps we take to help protect people and communities, guided by input from psychologists, psychiatrists, civil liberties and law enforcement experts, and others who help us navigate difficult decisions around safety, privacy, and democratized access.
Maggie Harrison Dupré, writing at Futurism:
Reading it, someone with limited context would come away with the impression that the company was talking about concerns that were still theoretical: that it’s proactively trying to head off bad things that might happen.
That suggestion is bizarre, though, because the reality is that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.
In fact, the most extraordinary thing that OpenAI neglected to mention was what almost certainly motivated the post in the first place: the company published the blog as news organizations — Futurism included — were reaching out to ask the company for comment on a new round of seven lawsuits it’s facing from the families of the victims of the February school massacre in Tumbler Ridge, British Columbia, which would be made public the next day.
Though the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Weeks after the tragedy rocked the rural town in February of this year, the Wall Street Journal revealed that back in June 2025, OpenAI’s automated moderation tools had flagged the shooter’s account for graphic descriptions of gun violence. Human reviewers were so alarmed that several pushed OpenAI leaders to alert local officials. Those leaders chose not to, and the company moved instead to deactivate that specific account; as OpenAI later admitted, though, the shooter simply opened a new account — a tactic that OpenAI’s customer service has been found encouraging post-deactivation — and continued to use the service.
Last week, Sam Altman offered an apology to the Tumbler Ridge community, writing:
I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child. My heart remains with the victims, their families, all members of the community, and the province of British Columbia.
I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.
I reaffirm the commitment I made to the Mayor and the Premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again.
Today, we shipped a huge update to Pedometer++. We have full details over on the Pedometer++ blog, and David has a post up as well:
Today I’m beyond delighted to announce the release of Pedometer++ version 8. I worked with legendary designer Rafa Conde to re-design the appearance and layout of the watchOS app to make it the most capable, yet intuitive, walking app on the App Store.
Pedometer++ has been on the Apple Watch from day one twelve years ago. Over that time I’ve built dozens of designs and features; today’s redesign learns from that journey and arrives at an incredible place.
The all-new step counter is both familiar and modern:
Expedition Mode is a new way to extend your Apple Watch’s battery life on longer walks, hikes, or runs by disabling constant heart rate tracking and relying instead on the basic background readings the Apple Watch provides. Based on our long-term testing, you can expect up to a 40 percent improvement in battery life with Expedition Mode. It’s wild.
The rest of the watchOS app has been overhauled as well. The workout screens have been redesigned, and the new maps are great. Here’s David again:
If you’re a premium subscriber when you start a workout you’ll be immediately brought to your new maps screen which shows your workout on a live updating map. This map will overlay your planned route, if selected.
This screen now features our completely custom dark mode map. I worked with a cartographer to design a map which looks perfectly at home on the Apple Watch, which is highly legible even at arm’s length and includes all the topographic and wayfinding information you need to keep you on track.
I mean… come on:
Over on MacStories, John Voorhees wrote:
Apple is due for an Apple Watch renaissance. It’s a great device, but my use of it hasn’t changed a lot over the years. I track workouts, check notifications and the weather, and, well, check the time.
What Pedometer++ shows is that there’s untapped potential there. Even before WWDC, there’s more room to experiment and delight Apple Watch users than most developers are taking advantage of. I wouldn’t be surprised if David senses an opportunity on the horizon, too.
David has been working on parts of this update for years, and it really shows. We couldn’t be prouder of how it turned out. Pedometer++ 8.0 is in the App Store now.
The U.S. Senate Committee on Commerce, Science, and Transportation, in a press release today:
U.S. Senate Commerce Committee Chairman Ted Cruz (R-Texas) and Senators Brian Schatz (D-Hawaii), John Curtis (R-Utah), and Adam Schiff (D-Calif.) today introduced the CHATBOT Act, legislation that would put parents, not Big Tech, in charge of how children and teens interact with AI chatbots.
While AI chatbots can support a child’s learning, research, and creativity, they also pose real risks to minors, including exposure to inappropriate content, language, and addictive features. Some AI companies have even deployed rewards, notifications, and targeted advertising to drive prolonged engagement by adolescent users.
The Children’s Health, Advancement, Trust, Boundaries, and Oversight in Technology Act, or CHATBOT Act, would require AI companies to establish “family accounts” for parents to manage access and usage of AI chatbots by their children. AI chatbots would limit manipulative design features; require parental consent for chatbot usage and parental controls to access and monitor a child’s conversations with a chatbot; and prohibit targeted advertising to children. In addition, the bill would direct further study on potential chatbot-related harms to children and best practices for parents.
(When your products are so unpopular and flawed that Ted Cruz and Adam Schiff agree that something should be done, you know it’s bad.)
Here’s a bit from the bill’s one-pager:
Reports have alleged that some AI chatbots have encouraged self-harm, fostered emotional dependency, and exposed minors to sexually explicit content. Research notes that chatbots may also pose developmental risks, such as weakening memory recall and ability to distinguish between human and non-human relationships. Those dangers can grow more acute during prolonged interactions. Some companies use rewards, nudges, and notifications that can keep children hooked on conversations. They may even exploit a child’s or teen’s data for targeted advertising and incentivize minors to spend money inside these systems.
In addition to questions about whether design choices have considered the wellbeing of children, parents should be empowered to limit harmful features, protect privacy, and guide how these systems interact with their children. Policymakers, educators, and families need greater insight into how these tools can be safely used by children while protecting mental health and social development.
The solutions proposed by the legislation aren’t bad, but they don’t go far enough. If usage limits and other safeguards have failed our young children when it comes to social media, these tools don’t stand a chance when it comes to ChatGPT, Gemini, Claude, and others.
Legislation should not put all of the responsibility for safety on parents. AI companies need to be regulated, and their products need strict safeguards in place when they are used by children. This bill would forbid companies from using minors’ personal data for targeted advertising and require them to build some basic tools for parents, but it does very little to address the addictive and harmful aspects of these products.
If you have any doubt about how inept Congress is when it comes to technology, look no further than the file name for the full text of the bill:
C:\Users\LAN\AppData\Local\Temp\LAN26253.loc
Is that a dumb thing to point out? Obviously. Is this act better than nothing? Of course. Do I think AI companies will continue to do what they want, how they want? Yep.
I was honored to join Eric Schwarz for the first episode of his new podcast, named Magical & Revolutionary. We had a wide-ranging conversation about my background and career, touching on the weirdness of covering large companies, my issues with xAI’s presence here in Memphis, and a lot more.
In a world of companies burning money and resources at a breathtaking rate, Nilay Patel’s essay on the state of AI offers a refreshing level of clarity.
The next time someone asks me what I think about AI, I will send this video with a note that I agree with all of it.
AI is the most complex thing to happen to the technology industry, and Patel nails many of the reasons why.
Here is a bit of his argument, after he outlines just how unpopular AI has become in the real world:
I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless, which is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism, and the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating about that — here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:
Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.
What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:
Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.
It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds.
You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.
As long as Dario Amodei, Sam Altman, and their peers are dressed up as pilots, I’m not sure I want to be on the plane. Nihilism without a parachute doesn’t sit well with me.
John Gruber, in his link to the video:
Something is profoundly off in the computer industry when it comes to software broadly and AI specifically. It’s up for debate what exactly is off and what should be done about it, but the undeniable proof that something is profoundly off is the deep unpopularity surrounding everything related to AI. You can’t argue that the public always turns against groundbreaking technology. The last two epoch-defining shifts in technology were the smartphone in the 2000s, and the Internet/web in the 1990s. Neither of those moments generated this sort of mainstream popular backlash. I’d say in both of those cases, regular people were optimistically curious. The single most distinctive thing about “AI” today is the vociferous public opposition to it and deeply pessimistic expectations about what it’s going to do.
The comparison to the 90s is a good one. We still had websites after the dot-com bubble, and we will have AI tools after this bubble bursts. John is right, though; I don’t think many people were opposed to online shopping in the way some are opposed to the rise of LLMs.
From a financial standpoint, thinking that the 2020s are just the 1990s on repeat is short-sighted; the horrifying deals between AI companies and the likes of Nvidia and Coreweave make the late 1990s look like child’s play.
The truth is simple: our economic and social moment is in the hands of people who do not understand the power they wield. They write handwringing essays about the dangers of new models with one hand, while cashing checks with the other.
Many people believe that AI is inevitable. “Get onboard or get left behind” is the tone that people and companies are taking more every day. In their worldview, to be concerned about AI is to be missing the most important change we’ve seen in technology (possibly) ever. Expressing worry is considered naive and against progress. The desire to slow down isn’t understood by some of these folks.
Look, I’m not dumb enough to believe the genie can be put back in the bottle, but I’m also smart enough to know that we have no idea what we’re doing.
Waiting and hoping for government regulation to save jobs, limit environmental damage, and rein in the mass data collection required to feed LLMs is not a plan. Elected officials are not equipped to move quickly enough to keep up; industry leaders are incentivized to push harder into the unknown.
The two may never meet in time.
The dangers of AI are both overwhelmingly large and heartbreakingly personal.
Mass layoffs and environmental concerns feel too big to wrap our arms around. Reading stories about people who have harmed themselves (and others) after spending time with LLM-powered chatbots feels too brutal to fully understand.
Turning the world into software inevitably includes these tradeoffs, as Nilay Patel continues:
I’ve reviewed a lot of tech products over the past decade and a half, and all I can tell you is that it is a failure when you ask people to adapt to computers. Computers should adapt to people. Asking people to make themselves more legible to software — to turn themselves into a database — is a doomed idea.
It’s an ask so big that I can’t imagine a reward that would make it worth it for anyone, even if the tech industry wasn’t constantly talking about how AI will eliminate all the jobs, require a wholesale rethinking of the social contract and — oops — also the latest models might cause catastrophic cybersecurity problems that might lead to the end of the world.
Does this sound like a good deal to you? Can you market your way out of this? This only makes sense if you have software brain — if your operative framework is to flatten everything into databases that you can control with structured language. The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code are people who look at the world and see opportunities for automation, to repeat tasks, to collect data. To build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.
For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. I’m wearing a Whoop band as I write this. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized, and it shouldn’t be.
In the tidal wave of cash and influence that is currently swelling, logic has been washed away. If my company were burning billions of dollars a year on increasingly unpopular products, I would have lost my job many times over.
Instead, the Silicon Valley rich and powerful keep getting richer and more powerful, at the expense of their users and the planet. AI is capable of incredible things, but it is ushering in terrible things at the same time. To ignore that is both naive and foolish.
Last night before going to bed, I told my iPhone to install iOS 26.4.2 and when I picked it up this morning, I was greeted by a Control Center bug that has been around since iOS 26 first launched:
For months, HomeKit controls have gone missing after an iOS update or device restart. In this case, I am missing controls for my garage door, my thermostats, and a couple of scenes.
All of those items are in the Home app itself, and are still fully functional. Even weirder is that tapping on a broken control reveals what it should be:
In my experience, the controls will heal themselves with a little time. I suspect that some time later today, they’ll all be back. In the meantime, it’s a reminder of a frustrating bug that has been around too long.
I’ve only seen this behavior with HomeKit controls, so I’ve filed my feedback with Apple as a “Home app & HomeKit / Matter Accessories” issue. It can be found as FB22601988.
Stephen plays audio from a website, Federico prefers to talk about products, Myke takes a victory lap, Tim announces his retirement, and John gets a new job.
I think the most interesting company in the personal computer space may be Framework, the small company dedicated to making repairable and upgradeable notebooks and desktops. It launched its first laptop — named the Framework Laptop 13 — in 2021, and you can still replace and upgrade components in it five years later. It started with an 11th-gen Intel Core processor, but can now run up to an AMD Ryzen AI 9 HX 370. A bunch of other things have been added as well, including support for Wi-Fi 7, a 2.8K display, a more robust keyboard, and more. All of that is on top of being able to replace the SSD, battery, and RAM in just a few minutes.
In those same five years, Framework has launched two additional notebooks and a desktop. Each of these products has its own tradeoffs and features, meaning just about anyone interested in something like that can find a machine that meets their needs.
This week, the company introduced a fourth laptop, the Laptop 13 Pro. Here’s a bit from the press release:
Today, we’re happy to introduce Framework Laptop 13 Pro, a complete ground up redesign that brings a massive leap in battery life with Intel’s Core Ultra Series 3 Processors, a 74Wh battery, and LPCAMM2 memory, a new full CNC aluminum chassis, our first purpose-built power-optimized display with touch support, an excellent feeling haptic touchpad, an option for pre-loaded Ubuntu, and much more. In many ways, this product has been six years in the making. We’ve taken all of the feedback you’ve given us on the first seven generations of Framework Laptop 13 to make this the ultimate portable developer and power user machine. With all of this, it’s still a Framework Laptop, meaning it’s repairable, upgradeable, customizable, and entirely yours to do what you want with. Framework Laptop 13 Pro is available to pre-order today, starting at $1,199 USD for DIY Edition and $1,499 USD for pre-built configurations, with first shipments in June.
There’s also a walk-through video on the company’s YouTube channel:
The new aluminum chassis — and its guts — are backward- and forward-compatible with the original Laptop 13. The touchscreen has a matte finish that seems incredibly impressive. LPCAMM2 memory means users can upgrade RAM later. The Laptop 13 Pro retains swappable expansion cards, which make changing the ports on the machine trivial. Keeping up with everything you can change about the laptop is super simple, thanks to Framework’s website.
Somewhere, Jony Ive is breathing heavily into a paper bag.
It seems like Framework has really taken what was groundbreaking about its original machine and made it even better. That’s impressive for such a small and young company, but in the video announcing the Laptop 13 Pro, Nirav Patel said something really interesting:
How do we build a MacBook Pro for Linux users?
I’m sure a bunch of Mac users would answer that question by laughing at Patel, but I think the question is fascinating.
The Laptop 13 Pro resembles Mac hardware thanks to its dark aluminum enclosure, which seems like a huge improvement over the older systems. Many reviews of previous Framework hardware have complained about issues like flexing top cases and weird seams between parts. Those things were assumed to be an unavoidable side effect of making a notebook that can be taken apart and rebuilt in a matter of minutes. It seems the company has addressed some of those issues with this new model.
On the other hand, Framework’s customizable, upgradable hardware stands in stark contrast to modern Apple hardware, which is increasingly consolidated onto single, dense logic boards. The MacBook Neo may be more repairable than previous machines, but even it falls short if Framework’s philosophy is the goal.
The second part of Patel’s question is more interesting than the first. Building a notebook for Linux users has historically been tricky for a few reasons. Framework is already seeing success here, but it clearly wants to continue growing its brand in the Linux world.
Chief among those reasons is hardware support. While it is better than it used to be, Linux users can run into weird driver issues and other complications, especially with notebooks. Framework has worked with Ubuntu and Fedora directly to support those distros, with many other options supported by the community. Combined with the ability to upgrade hardware over time and the Laptop 13 Pro’s impressive battery life, having a good Linux experience on a notebook should be easier than ever.
(It’s important to note that Framework has sponsored a couple of projects that they definitely should not have sponsored. Yikes. I hope the plan Patel outlined in that thread keeps them from making such mistakes in the future.)
Microsoft has crammed advertisements and AI features into every nook and cranny of Windows 11, leaving power users frustrated. Changes may be on the horizon, but Microsoft has a long way to go to repair those relationships.
In the meantime, Framework is positioning itself as an alternative to how things are normally done in the notebook world. I think that’s worth being excited about, whether it’s your cup of tea or not.
A very official and normal statement from the President:
I have always been a big fan of Tim Cook, and likewise, Steve Jobs, but if Steve was not taken from the Planet Earth so young, and ran the company instead of Tim, the company would have done well, but nowhere near as well as it has under Tim. For me it began with a phone call from Tim at the beginning of my First Term. He had a fairly large problem that only I, as President, could fix. Most people would have paid millions of dollars to a consultant, who I probably would not have known, but who would say that he knew me well. The fees would be paid but the job would not have gotten done. When I got the call I said, wow, it’s Tim Apple (Cook!) calling, how big is that? I was very impressed with myself to have the head of Apple calling to “kiss my ass.” Anyway, he explained his problem, a tough one it was, I felt he was right and got it taken care of, quickly and effectively. That was the beginning of a long and very nice relationship. During my five years as President, Tim would call me, but never too much, and I would help him where I could. Years latter, after 3 or 4 BIG HELPS, I started to say to people, anyone who would listen, that this guy is an amazing manager and leader. He makes these calls to me, I help him out (but not always, because he will, on occasion, be too aggressive in his ask!), and he gets the job done, QUICKLY, without a dime being given to those very expensive (millions of dollars!) consultants around town who sometimes get it done, and sometimes don’t. Anyway, Tim Cook had an AMAZING career, almost incomparable, and will go on and continue to do great work for Apple, and whatever else he chooses to work on. Quite simply, Tim Cook is an incredible guy!!! President DONALD J. TRUMP
Yes, that quote is exactly as it was written. I don’t think anyone has an obligation to clean up Trump’s bonkers writing.