I don’t keep text-based social media apps installed on my phone, but if I did, this would be the one I’d use for Mastodon and Bluesky.
Building Artemis II’s Fault-Tolerant Computer
Logan Kugler, writing in early April:
The computer system aboard the current Artemis II lunar space mission is from a different world than the one from the Apollo era. Apollo astronauts navigated to the lunar surface using a computer with a 1-MHz processor and roughly 4 kilobytes of erasable memory, supported by a larger store of fixed “rope” memory. While it was a marvel of 1960s engineering, the Apollo Guidance Computer’s functional scope was focused; it was not in the control loop for every system. Critical environmental and power controls were managed through manual or electromechanical means, such as switches and relays.
This month’s Artemis II mission, carrying a crew of four around the Moon for the first time in over 50 years, is supported by one of the most fault-tolerant computer systems ever built for spaceflight. Unlike Apollo, the Orion capsule’s computing architecture manages nearly all of the vessel’s safety-critical functions, from life support to communication routing.
When a mission is 250,000 miles from Earth, failure is unrecoverable. There are no runways for emergency landings and no technicians to swap out a fried motherboard. Every subsystem must be designed to survive cosmic-ray bit flips, radiation-induced latch-ups, and hardware faults without a single second of downtime.
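Neither the quote nor NASA’s public materials spell out Orion’s exact scheme here, but the textbook way to ride out a bit flip with zero downtime is majority voting across redundant channels, usually called triple modular redundancy. A minimal sketch, with an invented function and values rather than anything from Orion’s actual flight software:

```swift
// Triple modular redundancy (TMR), sketched: run the same computation
// on three independent channels and take the majority answer, so a
// single corrupted result (a cosmic-ray bit flip, say) is outvoted.
func votedResult<T: Equatable>(_ a: T, _ b: T, _ c: T) -> T? {
    if a == b || a == c { return a }  // `a` agrees with at least one other channel
    if b == c { return b }            // `b` and `c` outvote `a`
    return nil                        // no majority: escalate to fault handling
}

// Channel B returns a flipped value but is outvoted by A and C.
let thrusterCommand = votedResult(42, 46, 42)  // Optional(42)
```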
I love stories about computers in space.
Clarus Can Walk
With today’s update to its Developer app, Apple released a new sticker of Clarus the Dogcow. I am all for that, but seeing her walk is weird.
![Clarus the Dogcow walking]()
As far as I know, this is the first time the Dogcow has ever been shown with more than two legs. This feels wrong.
Connected 602: Computer Too Good ⇢
Myke compliments Federico, and Stephen has gone down a rabbit hole with Casey Liss leading the way. Also: Apple continues to adjust its Mac lineup as the memory crisis drags on, and the guys have some jobs for John.
Google Fitbit Air ⇢
I love the health-tracking features of my Apple Watch (duh) but there are times that I’d rather wear a watch without Slack or iMessage on it. For years, I’ve clamored for Apple to make a screenless tracker like the Whoop band. It seems like Google beat Apple to the punch with its new Fitbit Air:
![The Google Fitbit Air band]()
Samantha Kelly has more at Bloomberg:
The new device bears a striking resemblance to Whoop’s health tracker, featuring a soft fabric band with a battery and sensor pack underneath. One big difference is the business model: an upfront cost to buy the hardware and an optional $10 per month Google Health subscription. Whoop doesn’t charge for its hardware but instead has an annual subscription fee that begins at $200.
The Fitbit Air may appeal to users seeking a simpler alternative to the Apple Watch — one with fewer distractions and notifications — or a cheaper option than rival health trackers. The popular Oura Ring health tracker, sold by Oura Health Oy, starts at $349, while the cheapest smartwatch from Apple Inc., the SE 3, is $249. Many of Google’s existing Fitbits cost over $100, while its Pixel Watch 4 is $349.
Oura rings and Whoop bands1 can both sync data with Apple’s HealthKit. Google is launching a new Health app2 that the company says can accept data from HealthKit:
You can connect third-party data sources to the Google Health app to keep your health and fitness data in one place. This allows you to track your progress across multiple platforms like Android Health Connect, Apple Health, and other third-party apps. Once your data is connected, you can see all your data in one place, and ask Google Health Coach questions about your fitness and health data.
Hopefully that sync is actually bidirectional.
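For what it’s worth, the iPhone half of that sync is well-trodden territory: a tracker’s companion app asks HealthKit for permission, then queries samples (and, for true bidirectional sync, saves its own). A minimal sketch of the read side, assuming heart rate is among the synced types:

```swift
import HealthKit

let store = HKHealthStore()
let heartRate = HKQuantityType(.heartRate)

// Ask for both read and share access; "bidirectional" sync needs both,
// since writing data back goes through HKHealthStore.save(_:).
store.requestAuthorization(toShare: [heartRate], read: [heartRate]) { granted, _ in
    guard granted else { return }
    let query = HKSampleQuery(sampleType: heartRate, predicate: nil,
                              limit: 10, sortDescriptors: nil) { _, samples, _ in
        // A tracker's companion app would upload samples like these to
        // its own backend and save its device's readings back to HealthKit.
        print(samples ?? [])
    }
    store.execute(query)
}
```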
The Fitbit Air, at its low price of $99, is certainly intriguing. Even if it syncs with HealthKit, I think a lot of iPhone users would look to Apple for a product like this. I hope it becomes popular enough that Apple takes notice and builds the fitness band of my dreams.
- I’ve tried an Oura ring a couple of times over the years, but I just don’t like having things on my hands. Heck, my wedding band is a tattoo. ↩
- There is a basic free version, and an AI-infused Premium version. ↩
Anthropic Taking Over All Capacity of xAI’s First Memphis Data Center
Some afternoon news from an unsigned xAI press release:
SpaceXAI has signed an agreement with Anthropic to provide access to Colossus 1, one of the world’s largest and fastest-deployed AI supercomputers.
Built from the ground up in record time, Colossus delivers unprecedented scale for AI training, fine-tuning, inference, and high-performance computing workloads. Colossus 1 features over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators. The cluster delivers extreme parallel performance for large language models, multimodal systems, scientific simulations, and generative AI at frontier scale.
Anthropic plans to use this additional compute to directly improve capacity for Claude Pro and Claude Max subscribers.
Anthropic confirmed the news via an unsigned press release:
We’ve agreed to a partnership with SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.
…and included a detail that SpaceXAI1 left out:
We’ve signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center. This gives us access to more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month. This additional capacity will directly improve capacity for Claude Pro and Claude Max subscribers.
Let me repeat part of that for emphasis:
We’ve signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center.
As a native Memphian who has watched xAI pollute our air, make tools for creating horrific content, and go back on its word to local leaders, this is both infuriating and hilarious.
Colossus 2 is up and running and Musk says xAI no longer needs the first site.
Russell Brandom has more at TechCrunch:
In the short term, there’s an obvious logic at work. xAI’s existing products are mostly focused on Grok, which has seen plummeting usage since the image generation debacles earlier this year. If xAI’s data center buildout is that much more than what Grok needs to operate, partnering with Anthropic adds a lot of green to the balance sheet. This is especially useful as the company, now combined with SpaceX, speeds towards an IPO. More broadly, having Anthropic lined up as a customer makes it easier to believe that SpaceX’s orbital data center play might actually work.
But beyond the short-term benefit, the Anthropic partnership sends an unusual message about where Elon Musk’s priorities really lie. It suggests the company’s real business may be more about building data centers than training AI models.
I wonder if this is the reason the planned water treatment plant for the first site is now on hold. If xAI isn’t operating Colossus 1, the company may want out of it, despite previous comments.
Anthropic seems to be backed into a corner when it comes to capacity. I can’t imagine everyone there was super pumped to be associated with xAI, and the partnership will certainly cause some to sour on the company and its products.
I don’t know if Anthropic will have any real presence in Memphis. Some may see this as a chance for a bit of a reboot when it comes to the public relations issues xAI has faced here, but I don’t have high hopes for that. Anthropic is merely leasing capacity, not taking over the data center outright.
Even if Colossus 1 were to change hands, the only meaningful thing Anthropic could do (from an environmental standpoint) would be to get the water treatment plant back on track. The site isn’t going to be able to move to cleaner energy anytime soon, no matter what chatbot is running on its servers.
Claude customers will see a benefit, and SpaceX’s books will look a little better, but that’s all the change I see coming with this news.
Original post updated to reflect the news that xAI had already moved to its Colossus 2 site.
- In case you missed it, SpaceX now owns xAI, which in turn owns the X social network. SpaceX seems to have been the only one of the three making any money. As a fan of cool rockets and a non-fan of X and xAI, this has stirred complicated feelings for me. ↩
Connected #601: I Love Wrists — A Tier List of Tim Cook Quotes ⇢
In honor of Tim Cook’s pending retirement, Federico, Myke, and Stephen rank some of his quotes from the last 15 years.
This was quite the trip down memory lane.
The Wonderful World of Artemis II Photos
Hank Green has made something really cool. Called the Artemis II Photo Timeline, it’s an interactive way to scroll through photos from NASA’s recent crewed mission to cislunar space — but pinned to NASA’s official schedule of the mission.
It is also a tribute to publicly available data. Though the timeline includes some videos published to Instagram and YouTube, the vast majority are images from Flickr. NASA usually uploads them with EXIF data intact, and Flickr preserves it. NASA also provided the mission schedule and, even better, has a public API for the position of the Orion spacecraft at any given time. Which means Green was also able to correlate the photos with where they were taken along the craft’s trajectory.
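That correlation step is conceptually simple: each photo’s EXIF block carries a capture timestamp, so you find the position sample nearest that moment. A quick sketch, with hypothetical types standing in for whatever NASA’s API actually returns:

```swift
import Foundation

// A position report for the Orion spacecraft at a moment in time.
// The field names and reference frame are made up for illustration.
struct PositionSample {
    let time: Date
    let x, y, z: Double  // e.g. kilometers in an Earth-centered frame
}

// Given a photo's EXIF capture time, pick the nearest position sample.
func position(at photoTime: Date, from timeline: [PositionSample]) -> PositionSample? {
    timeline.min { lhs, rhs in
        abs(lhs.time.timeIntervalSince(photoTime)) < abs(rhs.time.timeIntervalSince(photoTime))
    }
}
```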
But why are these images on Flickr? Anil Dash explains:
Here’s the TL;DR:
- Flickr comes from (and helped start!) the Web 2.0 era, which was based on users having control over their data
- Tools at that time began giving creators the power to decide what license they wanted to release their content under, including permissions about how it could be shared, used, or remixed
- Because the people who made platforms back then were users and creators themselves, they thought about the long term and wanted to be able to preserve people’s work
- After lots of corporate shuffling, Flickr ended up in the hands of a family-owned company, SmugMug, and they made the Flickr Foundation to preserve public photos for the next 100 years
- NASA’s images should only be on a service where they can be stored in full resolution, for the long term, dedicated to the public domain — which the other social media apps of today can’t do
One detail that isn’t public: which astronaut took which photo. Hank Green explains:
A previous version of this site showed some data on which astronaut took which photo, but it was brought to my attention that the four astronauts together agreed that they did not want credit for any photos taken on the mission. I’m somewhat conflicted about this because this project is about giving as much context as possible, but of course there is also something very beautiful about not wanting to take individual credit for something that was the result of so much collaboration.
AI Psychosis Reaches the Executive Suite ⇢
An NBER study of nearly 6,000 CEOs and CFOs across the US, UK, Germany, and Australia found that roughly 90% of firms reported zero measurable impact on productivity or employment from AI over the past three years.
The average employee AI usage was 1.5 hours per week.
The average CEO AI usage was less than one hour per week.
Meanwhile, their companies are pouring money into the $690 billion AI infrastructure buildout that, according to Sequoia, needs $600 billion in annual revenue to justify itself (but currently generates maybe $50-100 billion).
Only one in five AI investments delivers any measurable ROI. Only one in 50 delivers transformational value. And 95% of enterprise AI pilots fail to escape the lab.
Sam Altman Apologizes to British Columbia Community, Wonders Who Could Have Stopped Such Violence
OpenAI, in an unsigned blog post:
Mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world. These incidents are a reminder of how real the threat of violence is—and how quickly violent intent can move from words to action.
People may also bring these moments and feelings into ChatGPT. They may ask questions about the news, try to understand what happened, express fear or anger, or talk about violence in ways that are fictional, historical, political, personal, or potentially dangerous. We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.
We’re sharing what we do to minimize uses of our services in furtherance of violence or other harm: how our models are trained to respond safely, how our systems detect potential risk of harm, and what actions we take when someone violates our policies. We are constantly improving the steps we take to help protect people and communities, guided by input from psychologists, psychiatrists, civil liberties and law enforcement experts, and others who help us navigate difficult decisions around safety, privacy, and democratized access.
Maggie Harrison Dupré, writing at Futurism:
Reading it, someone with limited context would come away with the impression that the company was talking about concerns that were still theoretical: that it’s proactively trying to head off bad things that might happen.
That suggestion is bizarre, though, because the reality is that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.
In fact, the most extraordinary thing that OpenAI neglected to mention was what almost certainly motivated the post in the first place: the company published the blog as news organizations — Futurism included — were reaching out to ask the company for comment on a new round of seven lawsuits it’s facing from the families of the victims of the February school massacre in Tumbler Ridge, British Columbia, which would be made public the next day.
Though the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Weeks after the tragedy rocked the rural town in February of this year, the Wall Street Journal revealed that back in June 2025, OpenAI’s automated moderation tools had flagged the shooter’s account for graphic descriptions of gun violence. Human reviewers were so alarmed that several pushed OpenAI leaders to alert local officials. Those leaders chose not to, and the company moved instead to deactivate that specific account; as OpenAI later admitted, though, the shooter simply opened a new account — a tactic OpenAI’s customer service has been found to encourage after deactivations — and continued to use the service.
Last week, Sam Altman offered an apology to the Tumbler Ridge community, writing:
I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child. My heart remains with the victims, their families, all members of the community, and the province of British Columbia.
I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.
I reaffirm the commitment I made to the Mayor and the Premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again.
Pedometer++ 8.0 ⇢
Today, we shipped a huge update to Pedometer++. We have full details over on the Pedometer++ blog, and David has a post up as well:
Today I’m beyond delighted to announce the release of Pedometer++ version 8. I worked with legendary designer Rafa Conde to re-design the appearance and layout of the watchOS app to make it the most capable, yet intuitive, walking app on the App Store.
Pedometer++ has been on the Apple Watch from day one, twelve years ago. Over that time I’ve built dozens of designs and features; today’s redesign learns from that journey and arrives at an incredible place.
The all-new step counter is both familiar and modern:
![The redesigned Pedometer++ step counter]()
Expedition Mode is a new way to extend your Apple Watch’s battery life on longer walks, hikes, or runs: it disables constant heart rate tracking and instead relies on the basic periodic readings the Apple Watch takes on its own. Based on our long-term testing, you can expect up to a 40 percent improvement in battery life with Expedition Mode. It’s wild.
The rest of the watchOS app has been overhauled as well. The workout screens have been redesigned, and the new maps are great. Here’s David again:
If you’re a premium subscriber when you start a workout you’ll be immediately brought to your new maps screen which shows your workout on a live updating map. This map will overlay your planned route, if selected.
This screen now features our completely custom dark mode map. I worked with a cartographer to design a map that looks perfectly at home on the Apple Watch, is highly legible even at arm’s length, and includes all the topographic and wayfinding information you need to keep you on track.
I mean… come on:
![Pedometer++’s custom dark mode map on the Apple Watch]()
Over on MacStories, John Voorhees wrote:
Apple is due for an Apple Watch renaissance. It’s a great device, but my use of it hasn’t changed a lot over the years. I track workouts, check notifications and the weather, and, well, check the time.
What Pedometer++ shows is that there’s untapped potential there. Even before WWDC, there’s more room to experiment and delight Apple Watch users than most developers are taking advantage of. I wouldn’t be surprised if David senses an opportunity on the horizon, too.
David has been working on parts of this update for years, and it really shows. We couldn’t be prouder of how it turned out. Pedometer++ 8.0 is in the App Store now.
CHATBOT Act Introduced in Senate ⇢
The U.S. Senate Committee on Commerce, Science, and Transportation, in a press release today:
U.S. Senate Commerce Committee Chairman Ted Cruz (R-Texas) and Senators Brian Schatz (D-Hawaii), John Curtis (R-Utah), and Adam Schiff (D-Calif.) today introduced the CHATBOT Act, legislation that would put parents, not Big Tech, in charge of how children and teens interact with AI chatbots.
While AI chatbots can support a child’s learning, research, and creativity, they also pose real risks to minors, including exposure to inappropriate content, language, and addictive features. Some AI companies have even deployed rewards, notifications, and targeted advertising to drive prolonged engagement by adolescent users.
The Children’s Health, Advancement, Trust, Boundaries, and Oversight in Technology Act, or CHATBOT Act, would require AI companies to establish “family accounts” for parents to manage access and usage of AI chatbots by their children. AI chatbots would limit manipulative design features; require parental consent for chatbot usage and parental controls to access and monitor a child’s conversations with a chatbot; and prohibit targeted advertising to children. In addition, the bill would direct further study on potential chatbot-related harms to children and best practices for parents.
(When your products are so unpopular and flawed that Ted Cruz and Adam Schiff agree that something should be done, you know it’s bad.)
Here’s a bit from the bill’s one-pager:
Reports have alleged that some AI chatbots have encouraged self-harm, fostered emotional dependency, and exposed minors to sexually explicit content. Research notes that chatbots may also pose developmental risks, such as weakening memory recall and ability to distinguish between human and non-human relationships. Those dangers can grow more acute during prolonged interactions. Some companies use rewards, nudges, and notifications that can keep children hooked on conversations. They may even exploit a child’s or teen’s data for targeted advertising and incentivize minors to spend money inside these systems.
In addition to questions about whether design choices have considered the wellbeing of children, parents should be empowered to limit harmful features, protect privacy, and guide how these systems interact with their children. Policymakers, educators, and families need greater insight into how these tools can be safely used by children while protecting mental health and social development.
The solutions proposed by the legislation aren’t bad, but they don’t go far enough. If usage limits and other safeguards have failed our young children when it comes to social media, those safeguards don’t stand a chance when it comes to ChatGPT, Gemini, Claude, and others.
Legislation should not put all of the responsibility for safety on parents. AI companies need to be regulated, and their products need strict safeguards in place when they are used by children. This bill would forbid companies from using minors’ personal data for targeted advertising and require them to build some basic tools for parents, but it does very little to address the addictive and harmful aspects of these products.
If you have any doubt about how inept Congress is when it comes to technology, look no further than the file name for the full text of the bill:
C:\Users\LAN\AppData\Local\Temp\LAN26253.loc
Is that a dumb thing to point out? Obviously. Is this Act better than nothing? Of course. Do I think AI companies will continue to do what they want, how they want? Yep.