
‘Noir, Japan’s Hard-Boiled Bittersweet Answer to Oreos’


Jake Adelstein (author of Tokyo Vice) on his blog Tokyo Paladin:

For decades, Japan’s Oreos weren’t made by Nabisco at all. They were produced domestically by Yamazaki Biscuits, under a licensing arrangement with what eventually became Mondelez International. This was, by most accounts, a reasonable arrangement. The cookies were local. The quality was consistent. Nobody was complaining.

Then Mondelez did what corporations do when things are working fine. The license expired, and Mondelez moved production of the Oreos it sells in Japan to China, exporting them to Japanese wholesalers and retailers. A cost decision. A spreadsheet decision. The kind of decision made in a room with no windows and a very good projector.

Sensitive Japanese consumers noticed quickly — the taste had changed. Into that opening stepped the Noir, inheriting the flavor the old Oreo had left behind.

Yamazaki Biscuits launched Noir in December 2017 as the successor nobody had officially asked for and everybody apparently wanted.

I have a great affinity for Newman-O’s, which I’ve previously described as “the cookies Oreos pretend to be”. Turns out, though, I’ve mostly sung the praises of Newman-O’s on my podcast and social media, not here on Daring Fireball. I love Newman-O’s, never tire of them, and will fight any man who argues that Oreos taste better. In fact, late last night, when a friend texted me with a link to this story from Adelstein, I was by sheer happenstance eating a few Newman-O’s. True story.

But now I’m fascinated by the existence of these Japanese rivals. A spite Oreo called Noir. They look and sound delicious, but they seem difficult to obtain in the U.S.

Link: tokyopaladin.substack.com/p/the-japanese-oreo-noir-kills…


Adobe’s ‘Modern’ User Interface Is Just Webpages


Nick Heer:

If you do a little poking around in Adobe’s application bundles, a key reason for the jankiness of these user interfaces becomes apparent: it is because they are little webpages. These dialog boxes are HTML files that reference a chunky CSS file and oodles of JavaScript, and appear to be built with React. [...]

I was going to write about how this stuff should have been tried with people who actually use Adobe’s apps in a high-pressure environment, but I am sure it was and, also, it does not matter. Wichary has it right. These are fundamental principles of user interface design that Adobe is ignoring because its internal tooling has taken precedence.

I will quibble only with this line from Heer’s post:

Also, Adobe’s interface has always been unique and not quite at home on either MacOS or Windows.

You have to go back to the 1990s and classic Mac OS, but Adobe’s best apps used to have exemplary native UIs. Apps like Photoshop helped push the state of the art in Mac UI forward. Tabbed palettes were a revelation. Fire up, say, Photoshop 3.0 on Mac OS 7.6 and see what I mean.

Also worth noting is how much this new “modern” UI isn’t just subjectively ugly; it’s also objectively breaking the habits and expectations of users with literally decades of experience with Photoshop — users who, like me, remember when Adobe’s UI wasn’t merely tolerable but actually good. It’s insane when you think about it.

How did Adobe lose that good sense of yore? Two ways. Gradually, then suddenly.

Link: pxlnv.com/linklog/adobe-modern-user-interface/


The text mode lie: why modern TUIs are a nightmare for accessibility


There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible. The logic assumes that because there are no graphics, no complex DOM, and no WebGL canvases, the content is just raw ASCII text that a screen reader can easily parse.

The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces. The very tools designed to improve the Developer Experience (DX) in the terminal—frameworks like Ink (JS/React), Bubble Tea (Go), or tcell—are actively destroying the experience for blind users.

↫ Casey Reeves

The central reason should be obvious: the command-line interface, at its core, is just a stream of data with the newest data at the bottom, linearly going back in time as you go up. Any screen reader can deal with this fairly easily, and while I personally have no need for such a tool, I’ve heard from those who do that kernel-level screen readers are quite good at what they do. TUIs, or text-based user interfaces, made with modern frameworks are actually very different: they’re “2D grid[s] of pixels, where every character cell is a pixel. [They abandon] the temporal flow for a spatial layout.”

It should become immediately obvious that screen readers won’t really know what to do with this, and Reeves gives countless examples, but the short version is this: the cursor jumps all over the place with every screen update, which makes screen readers go nuts. Various older TUIs, made in a time well before these modern TUI frameworks came about, were designed in a much more terminal-friendly way, or offer options to hide the cursor to sidestep the problem. Irssi, for example, uses VT100 scrolling regions instead of redrawing the whole screen every time something changes.
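
To make the difference concrete, here is a minimal sketch (my own illustration, not code from Reeves’ article) of the scrolling-region approach Irssi takes: the program declares a region with the VT100/DECSTBM escape sequence and appends new lines inside it, so the terminal scrolls only that region instead of the whole screen being repainted under the screen reader.

    import sys

    def set_scroll_region(top: int, bottom: int) -> None:
        # DECSTBM: ESC [ top ; bottom r  (1-based rows, inclusive)
        sys.stdout.write(f"\x1b[{top};{bottom}r")

    def append_line(bottom: int, text: str) -> None:
        # Move to the last row of the region; the newline scrolls only that
        # region up by one line, rather than forcing a full-screen redraw.
        sys.stdout.write(f"\x1b[{bottom};1H\n{text}")
        sys.stdout.flush()

    set_scroll_region(1, 20)          # rows 1-20 scroll; anything below stays fixed
    append_line(20, "new chat line")  # e.g. a fresh message in an IRC client
    sys.stdout.write("\x1b[r")        # reset to full-screen scrolling when done

Output keeps arriving as an ordinary downward stream inside the region, which is exactly the linear, temporal flow screen readers already handle well.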

I had never really stopped to think about TUIs and screen readers, as is common among us sighted people. The problems Reeves describes seem to stem not so much from TUIs being inherently inaccessible, but from modern frameworks not actually making use of the terminal’s core feature set. I really hope Reeves’ article shines a light on this problem, and that the people developing these modern TUIs start taking accessibility more seriously.


ENIAC’s Architects Wove Stories Through Computing



This year marks the 80th anniversary of ENIAC, the first general-purpose digital computer. The computer was built during World War II to speed up ballistics calculations, but its contributions to computing extend well beyond military applications.

Two of ENIAC’s key architects—John W. Mauchly, its co-inventor, and Kathleen “Kay” McNulty, one of the six original programmers—married a few years after its completion and raised seven children together. Mauchly and McNulty’s grandchild Naomi Most delivered a talk as part of a celebration in honor of ENIAC’s anniversary on 15 February, which was held online and in-person at the American Helicopter Museum in West Chester, Pa. The following is adapted from that presentation.

There was a library at my grandparents’ farmhouse that felt like it went on forever. September light through the windows, beech leaves rustling outside on the stone porch, the sounds of cousins and aunts and uncles somewhere in the house. And in the corner of that library, an IBM personal computer.

When I spent summers there as a child, I didn’t yet know that the computer was closely tied to my family’s story.

My grandparents are known for their contributions to creating the Electronic Numerical Integrator and Computer, or ENIAC. But both were interested in more than just crunching numbers: My grandfather wanted to predict the weather. My grandmother wanted to be a good storyteller.

In Irish, the first language my grandmother Kathleen “Kay” McNulty ever spoke, a word existed to describe both of these impulses: ríomh.

I began to learn the Irish language myself five years ago, and I was struck by how certain words and phrases had multiple meanings. According to renowned Irish cultural historian Manchán Magan—from whom I took lessons—the word ríomh has at different times been used to mean to compute, but also to weave, to narrate, or to compose a poem. That one word can tell the story of ENIAC, a machine with wires woven like thread that was built to compute, make predictions, and search for a signal in the noise.

John Mauchly’s Weather-Prediction Ambitions

Before working on ENIAC, John Mauchly spent years collecting rainfall data across the United States. His favorite pastime was meteorology, and he wanted to find patterns in storm systems to predict the weather.

The Army, however, funded ENIAC to make simpler predictions: calculating ballistic trajectory tables. Start there, co-inventors J. Presper Eckert and Mauchly realized, and perhaps the weather would soon be computable.

Co-inventors John Mauchly [left] and J. Presper Eckert look at a portion of ENIAC on 25 November 1966. Hulton Archive/Getty Images

Weather is a system unfolding through time, and a model of a storm is a story about how that system might unfold. There’s an old Irish saying related to this idea: Is maith an scéalaí an aimsir. Literally, “weather is a good storyteller.” But aimsir also means time. So the usual translation of this phrase into English becomes “time will tell.”

Mauchly wanted to ríomh an aimsire—to weave the weather into pattern, to compute the storm, to narrate the chaos. He realized that complex systems don’t reveal their full purpose at conception. They reveal it through aimsir—through weather, through time, through use.

ENIAC’s First Programmers Were Weavers

Kathleen “Kay” McNulty was born on 12 February 1921, in Creeslough, Ireland, on the night her father—an IRA training officer—was arrested and imprisoned in Derry Gaol.

Family oral history holds that her people were weavers. She spoke only Irish until her family reached Philadelphia when she was 4 years old, entering American school the following year knowing virtually no English. She graduated in 1942 from Chestnut Hill College with a mathematics degree, was recruited to compute artillery firing tables by hand for the U.S. Army, and was then selected—along with five other women—to program ENIAC.

They had no manual. They had only blueprints.

McNulty and her colleagues learned ENIAC and its quirks the way you learn a loom: by touch, by memory, by routing threads of electricity into patterns. They developed embodied knowledge the designers could only approximate. They could narrow a malfunction to a specific failed vacuum tube before any technician could locate it.

McNulty and Mauchly are also credited with conceiving the subroutine, the sequence of instructions that can be repeatedly recalled to perform a task, now essential in any programming. The subroutine was not in ENIAC’s blueprints, nor in the funding proposal. The concept emerged as highly determined people extended their imagination into the machine’s affordances.

The engineers designed the loom. Weavers discovered its true capabilities.

In 1950, four years after ENIAC was switched on, Mauchly’s dream was realized when the machine was used in the world’s first computer-assisted weather forecast. That became possible after Klara von Neumann and Nick Metropolis reassembled and upgraded the ENIAC with a small amount of digital program memory. The programmers who transformed the math into operational code for the ENIAC were Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. Their names are not as well-known as they should be.

Before programming ENIAC, Kay McNulty [left] was recruited by the U.S. Army to compute artillery firing tables. Here, she and two other women, Alyse Snyder [center] and Sis Stump, operate a mechanical analog computer designed to solve differential equations in the basement of the University of Pennsylvania’s Moore School of Electrical Engineering. University of Pennsylvania

Kay McNulty, Family Storyteller

Kay married John Mauchly in 1948, describing him as “the greatest delight of my life. He was so intelligent and had so many ideas.... He was not only lovable, he was loving.” She spent the rest of her life ensuring he, Eckert, and the ENIAC programmers would be recognized.

When she died in 2006, I came to her funeral in shock, not fully knowing what I’d lost. As she drifted away, it was said, she had been reciting her prayers in Irish. Word of this quickly made its way over to Creeslough, in County Donegal, and awaited me when I visited to honor her memory with the dedication of a plaque right there in the center of town.

In her own memoir, she wrote: “If I am remembered at all, I would like to be remembered as my family storyteller.”

In Irish, the word for computer is ríomhaire. One who ríomhs. One who weaves, computes, and tells. My grandfather wanted to tell the story of the weather through computing. My grandmother wanted to be remembered as a storyteller. The language of her childhood already had a word that contained both of those ambitions.

Computers as Narrative Engines

When it was built, ENIAC looked like the back room of a textile production house. Panels. Switchboards. A room full of wires. Thread.

Thread does not tell you what it will become. We tend to think of computing as calculation—discrete and deterministic. But a model is a structured story about how something behaves.

Weather models, ballistic tables, economic forecasts, neural networks: These are all narrative engines, systems that take raw inputs and produce accounts of how the world might unfold. In complex systems, when parts are woven together through use, new structures arise that no one specified in advance.

Like ENIAC, the machines we are building now—the large models, the autonomous systems—are not merely calculators. They are looms.

Their most important properties will not be specified in advance. They will emerge through use, through the people who learn how to weave with them.

Through imagination.

Through aimsir.


Why AI Systems Fail Quietly



In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: Every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.

Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.

This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.

When Systems Fail Without Breaking

Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.

Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.

But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.

From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.

The Limits of Traditional Observability

One reason quiet failures are difficult to detect is that traditional monitoring measures the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern observability. These metrics are well suited to transactional applications, where requests are processed independently and correctness can often be verified immediately.

Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return information that is technically valid but contextually inappropriate. A planning agent may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.

None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.

Why Autonomy Changes Failure

The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic and externally initiated by a user, scheduler, or external trigger.

Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.

In these systems, correctness depends less on whether any single component works and more on coordination across time.

Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.

A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small mistakes can compound. A step that is locally reasonable can still push the system further off course.

Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.

The Missing Layer: Behavioral Control

When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.

Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.

Engineers in industrial domains have long relied on supervisory control systems. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.

Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: shifts in outputs, inconsistent handling of similar inputs, or changes in how multistep tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.
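
As a rough sketch of what this can look like in practice (the class and thresholds below are hypothetical illustrations, not from the article), one behavioral signal for the summarization assistant described earlier is the age of the sources each summary cites. Component-level metrics stay green while this number quietly drifts upward.

    from collections import deque
    from datetime import datetime, timezone

    class SourceAgeMonitor:
        """Flags behavior drift when cited sources grow stale on average."""

        def __init__(self, window: int = 50, max_avg_age_days: float = 30.0):
            self.ages = deque(maxlen=window)          # rolling window of source ages, in days
            self.max_avg_age_days = max_avg_age_days  # acceptable average staleness

        def record(self, source_dates: list[datetime]) -> None:
            # Assumes timezone-aware publication dates for the cited documents.
            now = datetime.now(timezone.utc)
            for published in source_dates:
                self.ages.append((now - published).days)

        def is_drifting(self) -> bool:
            # Only alert once the window is full, so a few old documents
            # don't trigger spurious alarms.
            if len(self.ages) < self.ages.maxlen:
                return False
            return sum(self.ages) / len(self.ages) > self.max_avg_age_days

The specific metric matters less than what it is measured against: the system’s purpose (fresh regulatory summaries) rather than component health.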

Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.
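
A minimal sketch of such a supervisory layer might look like the following (the action fields, checks, and thresholds are illustrative assumptions, not a description of any particular product): each proposed action passes through a set of checks, and the most severe verdict decides whether it runs, is blocked, or is routed to a human.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable

    class Verdict(Enum):
        ALLOW = "allow"
        BLOCK = "block"
        ESCALATE = "escalate"   # route to human review

    @dataclass
    class Action:
        name: str
        impact: float       # estimated blast radius of the action, 0..1
        confidence: float   # the agent's own confidence in the action, 0..1

    def high_impact_needs_review(action: Action) -> Verdict:
        # High-impact actions are never executed silently.
        return Verdict.ESCALATE if action.impact > 0.7 else Verdict.ALLOW

    def low_confidence_blocked(action: Action) -> Verdict:
        # Very-low-confidence actions are rejected outright.
        return Verdict.BLOCK if action.confidence < 0.2 else Verdict.ALLOW

    class Supervisor:
        """Sits between an agent's proposed actions and their execution."""

        def __init__(self, checks: list[Callable[[Action], Verdict]]):
            self.checks = checks

        def review(self, action: Action) -> Verdict:
            # The most severe verdict wins: one BLOCK blocks, one ESCALATE escalates.
            verdicts = {check(action) for check in self.checks}
            if Verdict.BLOCK in verdicts:
                return Verdict.BLOCK
            if Verdict.ESCALATE in verdicts:
                return Verdict.ESCALATE
            return Verdict.ALLOW

    supervisor = Supervisor([high_impact_needs_review, low_confidence_blocked])
    print(supervisor.review(Action("publish_summary", impact=0.3, confidence=0.9)))    # ALLOW
    print(supervisor.review(Action("rebalance_accounts", impact=0.9, confidence=0.9)))  # ESCALATE

In a real deployment the checks would encode domain-specific bounds, but the structure is the point: behavior is evaluated and gated while the system runs, not just logged after the fact.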

Together, these approaches turn reliability into an active process. Systems don’t just run; they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.

A Shift in Engineering Thinking

Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.

As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.


Chip Can Project Video the Size of a Grain of Sand



By many estimates, quantum computers will need millions of qubits to realize their potential applications in cybersecurity, drug development, and other industries. The trouble is, anyone who has wanted to simultaneously control millions of a certain kind of qubit has run into the problem of trying to control millions of laser beams.

That’s exactly the challenge that was faced by scientists working on the MITRE Quantum Moonshot project, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a 1-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg cells.

“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, a professor of quantum engineering at the University of Colorado at Boulder and one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable, diamond-based quantum computer. Each second, their chip is capable of projecting 68.6 million individual spots of light, called scannable pixels to differentiate them from physical pixels. That’s more than 50 times the capability of previous technology, such as micro-electromechanical systems (MEMS) micromirror arrays.

“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says Henry Wen, a visiting researcher at MIT and a photonics engineer at QuEra Computing.

The chip’s distinguishing feature is an array of microscale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric material that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.

Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its lengthwise curvature.

A micro-cantilever wiggles and waggles to project light in the right place. Matt Saha, Y. Henry Wen, et al.

What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to Andy Greenspon, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie A Charlie Brown Christmas.

The chip projected a roughly 125-micrometer image of the Mona Lisa. Matt Saha, Y. Henry Wen, et al.

Because the chip can project so many more spots in any given time interval than any previous beam scanners, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits. So clearly, it needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers.

Another process that Wen thinks the chip could improve is scanning objects for 3D printing. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen.

Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a lab-on-a-chip for cell biology or drug development. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”
