Biomedical Engineering, Medicine, Public Health, Open Source, Structural Solutions

Amazon employees are "tokenmaxxing" due to pressure to use AI tools


The e-commerce group had posted team-wide statistics on AI usage by its staff, but recently limited access so that only employees themselves and their managers can view their stats. Managers are discouraged from using token consumption as a performance metric, according to a person familiar with the matter.

Meta employees have similarly engaged in so-called “tokenmaxxing” to improve their standing on internal leaderboards.

The MeshClaw tool that some employees have used to inflate their statistics was inspired by OpenClaw, which became a viral sensation in February. OpenClaw lets users run agents locally on their own hardware, such as desktop and laptop computers.

Amazon’s MeshClaw can initiate code deployments, triage emails, and interact with apps such as Slack, according to people familiar with the matter.

The company said in a statement that the tool enabled “thousands of Amazonians to automate repetitive tasks each day” and was one example of the group “empowering teams” to experiment and adopt AI tools.

“We’re committed to the safe, secure, and responsible development and deployment of generative AI for our customers,” it added.

More than three dozen Amazon employees worked on the in-house tool, according to internal documents. One recent memo describing the bot said: “It dreams overnight to consolidate what it learned, monitors your deployments while you’re in meetings, and triages your email before you wake up.”

Multiple Amazon employees said they were concerned about the security risks of an AI tool granted permission to act on a user’s behalf, since the agent may make errors or take unintended actions.

“The default security posture terrifies me,” one employee said. “I’m not about to let it go off and just do its own thing.”

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

satadru, 3 days ago:
"Juking the stats"

Tracing Olfactory Receptor Mapping Between the Nose and Brain


The sense of smell works via olfactory sensory neurons (OSNs), which express olfactory receptors (ORs) in the nasal epithelium and send signals onward to the brain. Once the signals arrive there, a hierarchy of processing results in the sensation of ‘smelling’. Exactly how the receptor-to-brain mapping forms during development, and whether its physical pattern in the nasal epithelium is replicated in the brain, remained major open questions until now. In a study published in Cell by [David H. Brann] and others, many of these questions have now been answered, at least for mice.

As it turns out, the mapping between OSNs and ORs isn’t the product of a random selection process; instead it creates a receptor map that’s closely matched between the nasal epithelium and the brain. What has complicated answering this question until now is that the nasal epithelium isn’t a flat surface, but a convoluted labyrinth that maximizes surface area, the better to smell with.

The second issue was linking the physical location of OSNs to gene expression in the nasal epithelium. Using a new approach, the researchers showed an intricate patterning in this epithelium, maintained by the basal stem cells from which it regenerates. This makes the system very similar to, for example, the auditory system, where the linear arrangement of frequency detection in the inner ear is replicated in the brain.

Although the study does not yet provide all the answers about how this genetic patterning works, it offers a glimpse of a fascinating system that seems to be reused across sensory systems. It may also point toward treatments for medical conditions in which the sense of smell is missing, reduced, or oddly miswired, as can happen after a SARS-CoV-2 infection of the olfactory nerve that leaves a constant sensation of a burning smell.

You have to wonder whether a better understanding of the nose will revive interest in digitally creating and sending smells.

satadru, 5 days ago:
This is the hackaday content my brain yearns for.

Why don’t lowercase letters come right after uppercase letters in ASCII?


With that context, I always found it strange that the designers of ASCII included 6 characters after uppercase Z before starting the lowercase letters. Then it hit me: we have 26 letters in the English alphabet, plus 6 additional characters before lowercase starts: 26 + 6 = 32. If you know anything about computers, powers of 2 tend to stick out. Let’s take a look at the binary representations of some characters compared to their lowercase counterparts.

↫ Tyler Hillery

I only have a middling understanding of the rest of the article, and thus of the ultimate reason why ASCII includes those six characters between Z and a, but I think it comes down to making certain operations on uppercase and lowercase letters more elegant. In some deep crevices of my brain all of this makes sense, but I find it very difficult to truly understand and explain as someone who knows little about programming.
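The elegance the quoted article hints at can be checked directly. Because the six-character gap after ‘Z’ pads each case run out to a 32-slot block, a letter’s uppercase and lowercase forms differ in exactly one bit, bit 5 (value 0x20), and case conversion collapses to a single XOR. A quick sketch in Python:

```python
# In 7-bit ASCII, 'A' is 0x41 and 'a' is 0x61. The six characters
# between 'Z' (0x5A) and 'a' (0x61) pad each case run to 32 code
# points, so the two cases of a letter differ only in bit 5 (0x20).
for ch in "ABZ":
    upper, lower = ord(ch), ord(ch.lower())
    print(f"{ch} = {upper:07b}   {ch.lower()} = {lower:07b}")

def toggle_case(c: str) -> str:
    """Flip the case of a single ASCII letter by toggling bit 5."""
    return chr(ord(c) ^ 0x20)

print(toggle_case("q"))  # Q
print(toggle_case("Q"))  # q
```

The same trick explains why forcing uppercase is often written as masking with `~0x20` (clearing bit 5) in low-level code.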

satadru, 5 days ago:
lovely

Google is tying reCAPTCHA to Google Play Services, screwing over de-Googled Android users


The ways in which Google can lock you into their ecosystem are often obvious, but sometimes, they’re incredibly sneaky and easily missed.

CAPTCHA tests are annoying, but at the same time, they can help protect websites from bots. While these tests are already the bane of our internet existence, they are going to get worse for some Android users. A requirement for Google’s next-generation reCAPTCHA system will make it a lot harder for de-Googled phones to browse the web.

A Reddit user has highlighted a seemingly innocuous support page for Google’s reCAPTCHA system. The page in question relates to troubleshooting reCAPTCHA verification on mobile. In the document, it says that you’ll need to use a compatible mobile device to complete verification. If you have an Android phone, then that means you’ll need to be running Google Play Services version 25.41.30 or higher.

↫ Ryan McNeal at Android Authority

When was the last time you actively thought about reCAPTCHA being a Google property? Even then, when was the last time you imagined that something as annoying but ultimately basic as a captcha prompt could be used to tie people to Google Play Services, and thus to “blessed” Android? Every time we manage to work around one of these asinine ties to Google Play Services, another one pops up to ruin our day. We’re so stupidly tied down to, and entirely dependent on, two very mid (at best) mobile operating systems, and it’s such a stupid own goal, especially for everyone outside of the US, to just sit there and do nothing about it.

Worse yet, it seems we’re only tying ourselves down further, while paying for the privilege.

At the very least we should be categorising certain services, such as government ID services, payment services, and popular messaging platforms, as vital infrastructure, and legally mandate that these services have clearly defined and well-documented APIs so anyone is free to make alternative clients. The fact that many people are tied to either iOS or “blessed” Android because of something as mundane as which bank they use, or the level of incompetence of their government ID service, should be a major crisis in any country that isn’t the US.

I don’t want to use iOS or Android, but nobody is leaving me any choice. It’s infuriating.


Challenging The Way We Pedal


The bicycle is an invention that has not changed in its fundamentals since the first recognisably modern machines appeared in the closing years of the 19th century. Its frame uses a structure of two triangles, its wheels are equal in size, and it’s propelled by a pedal crank and (in most cases) a chain. Bicycles have improved vastly in materials and performance, but if you were to wheel a 2026 tourer into an 1886 bike shop, the Victorian proprietor would recognise it. Only a very brave engineer would try to fundamentally change such a formula, but here’s [Not programming] with a crankless bicycle.

The idea is to replace the crank’s circular motion with a linear one, thus providing more constant propulsion. The build was inspired by another that used a sinusoidal track in a rotating cylinder to achieve the necessary conversion. This design takes a different tack, using an arrangement of gears and freewheels he describes as a mechanical rectifier to convert the back-and-forth motion of pedaling into rotation. The pedals themselves are stirrups mounted at each end of a V-belt.
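The “mechanical rectifier” is analogous to a full-wave rectifier in electronics: whichever direction the belt moves, the freewheels engage so the output shaft turns the same way. A toy numeric sketch of that idea (purely illustrative; the sinusoidal stroke profile is an assumption, not the project’s actual kinematics):

```python
import math

def pedal_velocity(t: float) -> float:
    """Back-and-forth belt velocity of the pedal stroke (toy model)."""
    return math.sin(t)

def rectified_output(t: float) -> float:
    """Output shaft speed: two opposing freewheels engage on opposite
    strokes, so the wheel sees the absolute value of the belt speed."""
    return abs(pedal_velocity(t))

# The output never reverses: the wheel always turns forward,
# regardless of which way the belt is moving.
samples = [rectified_output(0.1 * k) for k in range(100)]
print(min(samples) >= 0)  # True
```

The real mechanism has losses and engagement lag at each stroke reversal, which is exactly where the 3D-printed prototypes kept shearing.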

This build is an exercise in pushing the limits of 3D print strength, as prototype after prototype shears under load. He does finally get the thing to work, though, and we admire his persistence. Oddly, this isn’t the first 3D-printed bicycle geartrain we’ve seen.


Speech Jammer Gets Jammed Up


This project is perhaps the single most passive-aggressive thing we’ve ever seen on this site: rather than tell someone directly to ‘shut up’, [Blytical]’s speech jammer lets you hack their brain from across the room to stop them from speaking. It’s also a bit of an object lesson in why you shouldn’t just copy reference implementations without careful study: by his own admission, [Blytical] was forced to learn a lot more than he intended going into this project.

The brain hack behind it is called ‘delayed auditory feedback’: feed the target’s speech back to them with a short delay, only 50 to 200 ms, and the resulting confounding effect is apparently very difficult to speak through. The array of ultrasound transducers aims the audio accurately by serving as an inaudible, low-spread carrier wave, as we saw in another project this year. A shotgun mike picks up the audio from the speaker you wish to harass, and an array of audio processing circuitry takes care of the rest.
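At its core, the delay stage is just a ring buffer long enough to hold 50 to 200 ms of audio. A minimal software sketch of the idea (the sample rate and delay length here are illustrative assumptions, not [Blytical]’s actual parameters):

```python
from collections import deque

SAMPLE_RATE = 16_000  # Hz (assumed, for illustration)
DELAY_MS = 150        # within the 50-200 ms range cited above

def make_daf(delay_ms: int = DELAY_MS, rate: int = SAMPLE_RATE):
    """Return a per-sample processor that echoes input after a fixed delay.

    A fixed-length deque acts as a ring buffer: each incoming sample
    displaces the oldest one, which becomes the (delayed) output.
    """
    delay_samples = rate * delay_ms // 1000
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)

    def process(sample: float) -> float:
        delayed = buf[0]      # oldest sample, delay_samples steps ago
        buf.append(sample)    # newest sample pushes the oldest out
        return delayed

    return process
```

The hardware version does the same thing with analog or digital delay circuitry instead of a software buffer, which is the part that gave [Blytical] trouble.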

That’s where the problems happen, as [Blytical] admits he just tossed some reference implementations onto a PCB without thinking too hard about what he was doing. It’s the datasheet version of vibe coding, and it usually goes about as well: sometimes perfectly, but rarely without a lot of troubleshooting. That troubleshooting is really, really hard when you don’t quite understand why things were laid out the way they were in the datasheet. We don’t blame [Blytical]; you can learn a lot when you bite off more than you can chew. The fact that he risked this failure mode rather than doing the whole thing in software on a Pi says good things about how he’s conducting his education.

It’s a shame, though, because we’ve been waiting to see another one of these speech jammers in action for quite some time. Perhaps someone will try again; the ultrasonic array portion seems solved, so if the delay circuit was the problem, perhaps a tiny tape loop would suffice.
