Biomedical Engineering, Medicine, Public Health, Open Source, Structural Solutions

The disturbing white paper Red Hat is trying to erase from the internet


It shouldn’t be a surprise that companies – and for our field, technology companies specifically – working with the defense industry tend to raise eyebrows. With the genocide in Gaza, the threats of genocide and war crimes against Iran, and the mass murder in Lebanon, it’s no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash.

With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places.

It’s got some disturbingly euphemistic content.

The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems.

[…]

Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle.

[…]

Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality.

[…]

The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities.

[…]

If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters.

↫ Red Hat white paper titled “Compress the kill cycle with Red Hat Device Edge”

I don’t think there’s anything inherently wrong with working with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).

There are always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. Someone at Red Hat clearly seems to feel the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it’s easy to see why.

Of course, the internet never forgets, and I certainly don’t intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it’s an open source company doing the profiting. It feels like Red Hat, as an IBM subsidiary, is trying to have its cake and eat it too: profiting from the vast sums of money sloshing around in the US military-industrial complex while maintaining its image as a scrappy open source business success story shitting bunnies and rainbows.

It’s been a long time since Red Hat felt like a genuine part of the open source community. Most of us – both outside and inside Red Hat, I’m sure – have long been aware that those days are well behind us, and I guess Red Hat doesn’t like seeing its kill cycle this compressed.


★ Let Us Learn to Show Our Friendship for a Man When He Is Alive and Not After He Is Dead


For The New Yorker, Ronan Farrow and Andrew Marantz go deep profiling Sam Altman under the mince-no-words headline “Sam Altman May Control Our Future — Can He Be Trusted?” 16,000+ words — roughly one-third the length of The Great Gatsby — very specifically investigating Altman’s trustworthiness, particularly the details surrounding his still-hard-to-believe ouster by the OpenAI board in late 2023, only to return within a week and purge the board. The piece is long, yes, but very much worth your attention — it is both meticulously researched and sourced, and simply enjoyable to read. Altman, to his credit, was a cooperative subject, offering Farrow and Marantz numerous interviews during an investigation that Farrow says took over a year and a half.

A few excerpts and comments (not in the same order they appear in the story):

1.

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”

A recurring theme in the piece is that colleagues who’ve worked with Altman the closest trust him the least. This bit about Aaron Swartz warning friends that Altman is a “sociopath” who “can never be trusted” is, to my knowledge, new reporting. Swartz’s opinion carries significant weight with me.1 Swartz is lionized (rightly) for his tremendous strengths, and the profoundly tragic circumstances of his martyrdom have resulted in less focus on his weaknesses. But I knew him fairly well and he led a very public life, and I’m unaware of anyone claiming he ever lied. Exaggerated? Sure. Lied? I think never.

Another central premise of the story is that while it’s axiomatic that one should want honest, trustworthy, scrupulous people in positions of leadership at any company, the nature of frontier AI models demands that the organizations developing them be led by people of extraordinary integrity. The article, to my reading, draws no firm conclusion — produces no smoking gun, as it were — regarding whether Sam Altman is generally honest/trustworthy/scrupulous. But I think it’s unambiguous that he’s not a man of great integrity.

2.

Regarding Fidji Simo, OpenAI’s other “CEO”:

Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.)

This paragraph is juicy in and of itself, with its suggestions of palace intrigue. But it’s all the more interesting in light of the fact that, post-publication of the New Yorker piece, Fidji Simo has taken an open-ended medical leave from OpenAI. If we run with the theory that Altman is untrustworthy (the entire thesis of Farrow and Marantz’s story), and that Simo is also untrustworthy (based on the fraudulent scams she ran while CEO of Instacart, along with her running the Facebook app at Meta before that), we’d be foolish not to at least consider the possibility that her medical leave is a cover story for Altman squeezing Simo out after catching on to her angling to replace him atop OpenAI. The last thing OpenAI needs is more leadership dirty laundry aired in public, so, rather than fire her, maybe Altman let her leave gracefully under the guise of a relapse of her POTS symptoms?

Simo’s LinkedIn profile lists her in two active roles: CEO of “AGI deployment” at OpenAI, and co-founder of ChronicleBio (“building the largest biological data platform to power AI-driven therapies for complex chronic conditions”). If my spitball theory is right, she’ll announce in a few months that after recuperating from her POTS relapse, the experience has left her seeing the urgent need to direct her energy at ChronicleBio. Or perhaps my theory is all wet, and Simo and Altman have a sound partnership founded on genuine trust, and she’ll soon be back in the saddle at OpenAI overseeing the deployment of AGI (which, to be clear, doesn’t yet exist2). But regardless of whether the Altman-Simo relationship remains cemented or is in the midst of dissolving, it raises serious questions about why — if Altman is a man of integrity who believes that OpenAI is a company whose nature demands leaders of especially high integrity — he would hire the Instacart CEO who spearheaded bait-and-switch consumer scams that came right out of the playbook for unscrupulous car salesmen.

3.

Regarding Altman’s stint as CEO at Y Combinator, and his eventual, somewhat ambiguous, departure, Farrow and Marantz write:

By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached [Y Combinator founder Paul] Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)

Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”

Graham responded to this on Twitter/X thus:

Since there’s yet another article claiming that we “removed” Sam because partners distrusted him, no, we didn’t. It’s not because I want to defend Sam that I keep insisting on this. It’s because it’s so annoying to read false accounts of my own actions.

Which tweet includes a link to a 2024 tweet containing the full statement Farrow and Marantz reference, which reads:

People have been claiming YC fired Sam Altman. That’s not true. Here’s what actually happened. For several years he was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he’d said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we’d have been fine with that too. We didn’t want him to leave, just to choose one or the other.

Graham is standing behind Altman publicly, but I don’t think The New Yorker piece mischaracterized his 2024 statement about Altman’s departure from Y Combinator. Regarding the quote sourced to anonymous “Y.C. colleagues” that he told them “Sam had been lying to us all the time”, Graham tweeted:

I remember having a conversation after Sam resigned with a YC partner who said he and some other partners had been unhappy with how Sam had been running YC. I told him Sam had told us that all the partners were happy, so he was either out of touch or lying to us.

And, emphasizing that this remark was specifically in the context of how happy Y Combinator’s partners were under Altman’s leadership of YC, Graham tweets:

Every YC president tends to tell us the partners are happy. Sam’s successor did too, and he was mistaken too. Saying the partners are unhappy amounts to saying you’re doing a bad job, and no one wants to admit or even see that.

Seems obvious in retrospect, but we’ve now learned we should ask the partners themselves. (And they are indeed now happy.)

I would characterize Graham’s tweets re: Altman this week as emphasizing only that Altman was not fired or otherwise forced from YC, and could have stayed as CEO at YC if he’d found another CEO for OpenAI. But for all of Graham’s elucidating engagement on Twitter/X this week regarding this story, he’s dancing around the core question of the Farrow/Marantz investigation, the one right there in The New Yorker’s headline: Can Sam Altman be trusted? “We didn’t ‘remove’ Sam Altman” and “We didn’t want him to leave” are not the same thing as saying “I think Sam Altman is honest and trustworthy” or “Sam Altman is a man of integrity”. If Paul Graham were to say such things, clearly and unambiguously, those remarks would carry tremendous weight. But — rather conspicuously to my eyes — he’s not saying such things.

4.

From the second half of the same paragraph quoted above, that started with Aaron Swartz’s warnings about Altman:

Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless” — or memoryless — models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

The most successful scams — the ones that last longest and grow largest — are the ones with an actual product at the heart. Scams with no actual there there go bust quickly: the Bankman-Fried FTX scandal blew up fast because FTX never offered anything of actual value. Bernie Madoff, though, had a long career because much of his firm’s business was legitimate, and that legitimate business is what enabled him to keep the Ponzi scheme going for two decades.

But the better comparison to OpenAI — if that “small but real chance” comes true — might be Enron. Enron was a real company that built and owned a very real pipeline and energy infrastructure business. ChatGPT and Codex are very real, very impressive technologies. Enron’s operations were real, but the story they told to investors was a sham. OpenAI’s technology is undeniably real and blazing the frontier of AI. It’s the financial story Altman has structured that seems alarmingly circular.


  1. In a 2005 Y Combinator “class photo”, Altman and Swartz are standing next to each other. Despite the fact that Altman was sporting a reasonable number of popped polo collars (zero), Swartz was clearly the better-dressed of the two.* ↩︎
    * Aaron would’ve loved this footnote. Christ, I miss him.

  2. With rare exceptions, I continue to think it’s a sign of deep C-suite dysfunction when a company has multiple “CEOs”. When it actually works — like at Netflix, with co-CEOs Ted Sarandos and Greg Peters (and previously, Sarandos and Reed Hastings before Hastings’s retirement in 2023) — the co-CEOs are genuine partners, and neither reports to the other. There is generally only one director of a movie, but there are exceptions, who are frequently siblings (e.g. the Coens, the Wachowskis, the Russos). A football team only has one head coach. The defensive coordinator is the “defensive coordinator”, not the “head coach of defense”. It’s obvious that Fidji Simo reports to Sam Altman, and thus isn’t the “CEO” of anything at OpenAI. But OpenAI does have applications, and surely is creating more of them, so being in charge of applications is being in charge of something real. By any reasonable definition, AGI has not yet been achieved, and many top AI experts continue to question whether LLM technology will ever result in AGI. So Simo changing her title to (or Altman changing her title to) “CEO of AGI deployment” is akin to changing her title to “CEO of ghost busting” in terms of its literal practical responsibility. ↩︎︎


4th Circuit Upholds West Virginia's Compulsory Vaccination Law That Excludes Religious Exemptions


In Perry v. Marteney, (4th Cir., April 8, 2026), the U.S. 4th Circuit Court of Appeals, in a 2-1 decision, held that West Virginia's law that requires children attending school in the state to be vaccinated against a number of infectious diseases may be constitutionally applied to a student attending the state's online public school over the religious objections of the student's parents. West Virginia allows medical exemptions from the vaccination requirement but does not permit religious exemptions. The court rejected the parents' claim that the compulsory vaccination law is not "generally applicable", and thus must satisfy the strict scrutiny test, and also suggested that it does satisfy strict scrutiny. The majority said in part:

... [A] state’s interest in vaccinating its citizens and protecting its school children has long been recognized as of the utmost importance.... This is not just some ho-hum, every day “compelling interest.” Even under the strictest scrutiny, courts should not annul and eviscerate this fundamental state concern merely because a challenged law in some respect falls short of some perceived perfection. And much less is required of neutral and generally applicable laws....

West Virginia’s compulsory vaccination law does not provide a mechanism for granting individualized exemptions. State officials do not have any discretion “to decide which reasons” for refusing vaccination “are worthy of solicitude.”... The law recognizes only one kind of exemption—medical exemptions—and clearly articulates the circumstances in which state officials can grant them....

The Perrys first argue that West Virginia’s compulsory vaccination law is not generally applicable for another reason: it does not apply to other groups that pose a similar hazard to public health....  [T]he vaccine mandate does not apply to: (1) children educated outside of the school system (i.e., educated at home, in learning pods, or in microschools); (2) adults working in schools; or (3) children attending school who have been granted a medical exemption. 

It is certainly true that West Virginia’s vaccine mandate could sweep more broadly than it does. But a law does not lack general applicability merely because it makes classifications.... Classifications only pose a constitutional concern if they treat “comparable secular activity more favorably than religious exercise.” 

... [T]he Perrys do not allege that K.P.’s desire to attend the Virtual Academy is religiously motivated, so this is merely an instance of West Virginia treating some secular activity more favorably than other secular activity....

The burden imposed by West Virginia’s compulsory vaccination law is not remotely “of the same character” as those imposed in Yoder and Mahmoud. ... The law is a public health measure, not an instrument of ideological indoctrination. It does not expose children to values or beliefs that might be hostile to their parents’ religious beliefs. It does not require that school instruction extoll the virtues of vaccines. All the law requires is that, in the interest of protecting others, children get themselves vaccinated before attending school. The need for some to protect the health and well-being of all was not present in Yoder or Mahmoud.

Judge Niemeyer dissented, saying in part:

The injunction entered here [by the district court] hardly affects West Virginia’s compelling interest in preventing the spread of infectious disease, as the injunction treats virtual students the same as other West Virginia students not physically attending a school while, at the same time, preserving the Perrys’ free exercise rights....

To be sure, West Virginia absolutely has a compelling state interest to prevent the spread of infectious disease in order to protect the health and safety of the public, as the district court acknowledged and the majority emphasizes.  But the School Officials have failed to show that the law’s failure to make an exception for virtual students with a sincere religious objection to complying with the mandatory vaccination law is consistent with narrow tailoring when students similarly situated with regard to the risk addressed need not comply at all....

satadru comments: There's a reason that WV & MS used to have the highest vaccination rates in the US...

Hobbes vs Anarchism

satadru comments: Calvinball & the Social Contract.

The Summoning of Bertrand Russell

satadru comments: Evergreen advice...

Intelligent Life of Earth
