No gradients. No backprop. Just projections — a fundamentally different, mathematically grounded approach to neural network training that scales. Joint work with Manish Krishan Lal, Stefanie Jegelka and Suvrit Sra.
📄 https://lnkd.in/eajcfeH3
Here’s how it works 🧵
• We reformulate training as a 𝗳𝗲𝗮𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝗽𝗿𝗼𝗯𝗹𝗲𝗺, not loss minimization.
• Each neuron and each data point adds a constraint.
• We then project onto the constraint sets; a point that satisfies all constraints = a trained model.
Why this is cool:
1. Projections are 𝗰𝗵𝗲𝗮𝗽; roughly the cost of a forward pass
2. They can be computed independently across neurons and data points -> 𝗶𝗻𝗵𝗲𝗿𝗲𝗻𝘁 𝗽𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻
3. Natural support for 𝗻𝗼𝗻-𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗶𝗮𝗯𝗹𝗲 components and 𝗵𝗮𝗿𝗱 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀
We built a whole framework for this: 𝗣𝗝𝗔𝗫
• Think autodiff for projections.
• Built on JAX, it inherits hardware acceleration & JIT, with a familiar interface.
• We trained MLPs, CNNs, and RNNs with PJAX.
• 🔗 https://lnkd.in/ea4pc-SG
Looking forward to the community's response! The approach has potential beyond standard training — particularly for tasks with non-differentiable components or local constraints, like 𝗽𝗿𝘂𝗻𝗶𝗻𝗴, 𝗾𝘂𝗮𝗻𝘁𝗶𝘇𝗮𝘁𝗶𝗼𝗻, and 𝘀𝗽𝗮𝗿𝘀𝗲 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴.
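PJAX's actual interface lives in the linked repo; purely as a minimal, library-free sketch of the feasibility-problem idea, here is the classic Kaczmarz scheme, which "trains" a linear model by cyclically projecting onto each data point's constraint set — no loss function, no gradients:

```python
import numpy as np

def kaczmarz(X, y, sweeps=200):
    """Solve the feasibility problem w·x_i = y_i for all i by
    cyclically projecting w onto each example's constraint set
    (a hyperplane). A point in the intersection = a trained model."""
    w = np.zeros(X.shape[1])
    for _ in range(sweeps):
        for x_i, y_i in zip(X, y):
            # Orthogonal projection of w onto {w : w·x_i = y_i},
            # roughly the cost of one forward pass through this "neuron"
            w += (y_i - x_i @ w) / (x_i @ x_i) * x_i
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ rng.normal(size=5)      # consistent system: all constraints satisfiable
w = kaczmarz(X, y)
print(np.allclose(X @ w, y))
```

Each projection touches only one data point, which is the source of the parallelism claim: independent constraints can be projected onto concurrently.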
Neurotechnology Innovations
-
🧠🕊️ As a neuroscientist, this crosses a line.
A Russian neurotech company claims it can remotely steer pigeons by implanting brain chips that stimulate flight decisions. No training. No learning. The bird “wants” to turn because its neural circuits are directly driven. The company calls them “BirdDrones.”
From a technical standpoint, this is not science fiction. Targeted stimulation of motor and reward circuits can bias behavior. We have known this for years in rodents and primates.
What is new and disturbing is the normalization and scaling:
• A production line for brain implantation.
• Claims of 100% survival, without data.
• Field deployment in the wild.
• Framed as infrastructure monitoring.
This is not about pigeons. It is about intent, governance, and boundaries. Once behavior can be externally authored rather than modulated, we are no longer studying the brain. We are overriding it.
Neurotechnology needs rules that keep pace with the hardware. Otherwise, “can we do it” will keep outrunning “should we.” This should worry everyone, not just neuroscientists.
#Neuroscience #Neuroethics #BrainComputerInterfaces #Bioengineering #DualUse #TechGovernance #ResearchEthics
-
Jonathan Boymal: "In a new paper, British philosopher Andy Clark (author of the 2003 book Natural Born Cyborgs, see comment below) offers a rebuttal to the pervasive anxiety surrounding new technologies, particularly generative AI, by reframing the nature of human cognition. He begins by acknowledging familiar concerns: that GPS erodes our spatial memory, search engines inflate our sense of knowledge, and tools like ChatGPT might diminish creativity or encourage intellectual laziness. These fears, Clark observes, mirror ancient worries, like Plato’s warning that writing would weaken memory, and stem from a deeply ingrained but flawed assumption: the idea that the mind is confined to the biological brain.

Clark challenges this perspective with his extended mind thesis, arguing that humans have always been cognitive hybrids, seamlessly integrating external tools into our thinking processes. From the gestures we use to offload mental effort to the scribbled notes that help us untangle complex problems, our cognition has never been limited to what happens inside our skulls. This perspective transforms the debate about AI from a zero-sum game, where technology is seen as replacing human abilities, into a discussion about how we distribute cognitive labour across a network of biological and technological resources.

Recent advances in neuroscience lend weight to this view. Theories like predictive processing suggest that the brain is fundamentally geared toward minimising uncertainty by engaging with the world around it. Whether probing a river’s depth with a stick or querying ChatGPT to clarify an idea, the brain doesn’t distinguish between internal and external problem-solving—it simply seeks the most efficient path to resolution. This fluid interplay between mind and tool has shaped human history, from the invention of stone tools to the design of modern cities, each innovation redistributing cognitive tasks and expanding what we can achieve.
Generative AI, in Clark’s view, is the latest chapter in this story. While critics warn that it might stifle originality or turn us into passive curators of machine-generated content, evidence suggests a more nuanced reality. The key, Clark argues, lies in how we integrate these technologies into our cognitive ecosystems."
-
Your eye doctor might diagnose Alzheimer's before your neurologist does. And the test takes 5 minutes.
New research shows retinal imaging detects early Alzheimer's with 93.5% accuracy. That's better than most brain scans. From a routine eye exam. Think about that: the same equipment checking for glaucoma could spot dementia 10 years early.
Here's the science that could change things:
1. Your retina is literally brain tissue
↳ Same embryological origin as your brain
↳ Only place we can see living neurons directly
↳ Shares blood-brain barrier properties
↳ Changes mirror brain degeneration in real time
2. What Alzheimer's looks like in your eye
↳ Thinning of specific retinal nerve fiber layers
↳ Reduced blood vessel density in the macula
↳ Altered vessel branching patterns
↳ Microhemorrhages you can't see or feel
3. The accuracy is remarkable
↳ 93.5% for early-onset Alzheimer's detection
↳ 86.3% for mild cognitive impairment
↳ AI analyzes patterns humans miss
↳ Non-invasive, no radiation, results in minutes
4. Why this beats traditional testing
↳ Brain MRI costs $3,000+ and takes hours
↳ PET scans require radioactive tracers
↳ Lumbar puncture is invasive and expensive
↳ Retinal imaging costs less than an oil change
5. The implementation gap
↳ Technology exists right now
↳ Equipment already in most eye clinics
↳ We're just not using it for dementia screening
↳ Insurance doesn't cover it yet for this purpose
I do a basic retinal exam as part of my neurology visits, but the naked eye can't catch these kinds of changes accurately enough. Just like we have eye screening tools in every pediatrics office, these devices would be lightweight and cheap enough to be in every PCP office.
I've diagnosed over 1,000 dementia cases in 15 years. Most come too late. After years of decline. When interventions are less effective. This technology could flip that entirely.
Your annual eye exam becomes dementia screening. Accessible. Affordable. Already available. The tools exist. We're just not using them.
Imagine catching Alzheimer's when lifestyle changes, medications, and interventions actually work. Not after 50-70% of brain function is already lost. That's the future retinal imaging may offer.
💬 When was your last comprehensive eye exam?
♻️ Repost if you believe accessible early detection saves lives
👉 Follow me (Reza Hosseini Ghomi, MD, MSE) for breakthroughs in early diagnosis that are practical
Citation: Hao, J., et al. (2024). Early detection of dementia through retinal imaging and trustworthy AI. npj Digital Medicine.
-
Collaborative innovation combining AI with neuropsychology is proving to be transformative. Six research clusters show specific value and potential:
🌱 Neuroscience and Mental Health: Understanding mental health through neuroimaging and machine learning enables earlier, more precise interventions for conditions like ADHD and depression. By examining correlations in brain function, this research helps identify key markers for cognitive impairments, aiding in early diagnosis and personalized treatment plans.
🔍 Computational Modeling: Computational models simulate decision-making and cognitive markers, which are crucial for neurological conditions like epilepsy. Machine learning applied to seizure detection, for instance, offers a potential breakthrough in predicting and managing epilepsy, helping patients gain better control and care.
🧠 Cognitive Neuroscience: Studies of cognitive decline and neurodegenerative diseases, such as Alzheimer’s, benefit from reinforcement learning models that reveal patterns in brain degeneration. These insights are essential for developing strategies to slow disease progression, offering hope for more effective interventions.
💡 Cognitive Neurology and Neuropsychology: Examining cognitive functions through neuroimaging and machine learning provides deeper insights into disorders like aphasia and neurocognitive deficits. By mapping brain functions and assessing structural changes, these studies advance our understanding of how specific neurological impairments affect behavior and cognition.
💗 Neuropsychological Features: Machine learning models predict mental health outcomes and cognitive declines by analyzing attention and processing speed. This focus on prediction and prevention, especially for conditions like cardiovascular disease impacting cognition, enables proactive care and lifestyle adjustments to mitigate risks.
⚙️ Neurodegenerative Conditions: AI-based predictive models for neurodegenerative diseases like Parkinson’s allow for early, more accurate diagnoses. By analyzing markers in social cognition and emotional processing, this cluster supports personalized interventions, helping to maintain patient quality of life and reduce care burdens.
This is only the beginning. This field is absolutely ripe for rapid advance and massive real-world value.
-
FDA clears first blood test to help diagnose Alzheimer’s disease in the US:
🧠 The FDA has cleared the first blood-based diagnostic to aid Alzheimer’s diagnosis in adults over 55 with cognitive symptoms, marking a major milestone for early detection
🧠 The test, developed by Fujirebio Diagnostics, measures the ratio of two proteins in blood plasma, pTau217 and β-amyloid 1-42, that correlate with amyloid plaque buildup in the brain
🧠 It’s not a stand-alone diagnostic, but when combined with other clinical information, it can help determine whether Alzheimer’s pathology is likely present
🧠 Compared to PET scans or spinal taps, the blood test is faster, cheaper, less invasive, and far more scalable in routine practice
🧠 Clinical validation showed 91.7% of those who tested positive had amyloid confirmed by PET or CSF tests, and 97.3% of those with negative results were confirmed negative
🧠 This clearance could improve access to new Alzheimer’s treatments like Leqembi (Eisai/Biogen) and Kisunla (Eli Lilly), which are most effective when started early but remain underprescribed
🧠 Biogen has partnered with Fujirebio, and Eisai Co., Ltd. has partnered with C2N Diagnostics, signaling that pharma is betting on blood tests to boost uptake of amyloid-targeting Alzheimer’s drugs
#healthtech #pharma
-
This could be a watershed moment for AI, as the 'Deep Learning' era may be evolving into something new.
For the last decade, researchers and engineers have focused on enhancing AI by stacking more layers, the defining trait of deep neural networks. But a seminal new paper from Google Research for NeurIPS 2025 exposes a fundamental flaw in this approach: these models are static! Once trained, modern models are frozen in time, experiencing a form of 'anterograde amnesia' where they cannot learn from the present without forgetting the past.
The paper, titled 'Nested Learning: The Illusion of Deep Learning Architectures' by Ali Behrouz, Meisam Razaviyayn, Peilin Zhong, and Vahab Mirrokni, proposes a paradigm shift: Nested Learning (NL). Instead of merely stacking layers, NL reimagines models as a system of 'nested optimization problems', each operating at its own speed. Inspired by human brain waves, where high-frequency neurons manage the immediate present and low-frequency oscillations consolidate long-term memory, this approach unlocks the potential for true continual learning.
Additionally, the authors introduce HOPE, a new architecture based on this paradigm. HOPE demonstrates superior performance, surpassing Transformers, RetNet, and Titans in language modeling and reasoning tasks. This could serve as the blueprint for the next generation of AI.
Blog - https://lnkd.in/dQ_vermU
Paper - https://lnkd.in/di8wnF7r
#ArtificialIntelligence #MachineLearning #GoogleResearch #NestedLearning #ContinualLearning #AI
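HOPE's real architecture is described in the paper; purely as a toy sketch of the multi-frequency intuition — a fast parameter group updated every step and a slow group that consolidates accumulated signal only every K steps — one might write:

```python
import numpy as np

# Toy multi-timescale optimization on a linear regression problem.
# This illustrates the "different update frequencies" idea only; it is
# not the paper's HOPE architecture.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = X @ rng.normal(size=8)

w_fast = np.zeros(8)          # high-frequency level: reacts every step
w_slow = np.zeros(8)          # low-frequency level: consolidates every K steps
K, slow_grad = 16, np.zeros(8)

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

start = loss(w_fast + w_slow)
for t in range(256):
    x_i, y_i = X[t % len(X)], y[t % len(X)]
    g = (x_i @ (w_fast + w_slow) - y_i) * x_i   # per-example gradient
    w_fast -= 0.01 * g                          # fast level: immediate update
    slow_grad += g                              # slow level: accumulate only
    if (t + 1) % K == 0:
        w_slow -= 0.001 * slow_grad / K         # consolidate, then reset
        slow_grad[:] = 0.0
print(loss(w_fast + w_slow) < start)
```

The learning rates, K, and the split into exactly two levels are illustrative choices; the paper generalizes this to a nested hierarchy of optimizers.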
-
AI is getting closer to accessing the one thing we’ve always considered private: your thoughts.
Recent advances in neuro-AI can now identify whether a person recognizes specific information using EEG signals. A 2025 study using deep learning reached 86.7% accuracy in detecting recognition through the P300 brain wave: a response triggered before conscious awareness.
Meanwhile, some jurisdictions are already experimenting with this technology. 🇮🇳 India has used brain-mapping techniques in hundreds of criminal investigations, showing just how quickly neuroscience can enter real-world decision systems.
But the implications go beyond law enforcement. AI models can now (fMRI + diffusion models):
✔️ Reconstruct visual experiences directly from brain activity: models that reconstruct what you’re seeing, in near real-time, based solely on your brain activity (think: AI generating the images your eyes are looking at)
✔️ Decode unspoken language in early experimental settings: models that reconstruct the words you’re thinking, even if you never speak
A 2023–2024 wave of studies using fMRI + LLMs demonstrated the ability to decode the semantic meaning of inner speech, turning thoughts into text-like outputs.
This raises critical questions for business leaders, policymakers, and innovators: How do we prepare for a world where cognitive data becomes a new category of sensitive information? What safeguards, standards, and governance frameworks will protect mental privacy as neuro-AI scales?
The technology is advancing faster than the regulations around it, and the organisations that understand this early will be better positioned to navigate what comes next.
#AI #Neuroscience #Innovation #Leadership #Ethics #FutureOfWork
References:
Kim, S., Cheon, J., Kim, T., Kim, S. C., & Im, C.-H. (2025). Improving electroencephalogram-based deception detection in concealed information test under low stimulus heterogeneity. arXiv. https://lnkd.in/dyVqBbG3
Takagi, Y., & Nishimoto, S. (2022). High-resolution image reconstruction with latent diffusion models from human brain activity. bioRxiv. https://lnkd.in/dfc32mS7
Tang, J., LeBel, A., Jain, S., et al. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26, 858–866. https://lnkd.in/dnQxcS_d
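The cited study's deep-learning pipeline is far more sophisticated; as a hypothetical, self-contained sketch of how P300-based recognition detection is typically framed — epochs time-locked to a stimulus, a positive deflection near 300 ms on "recognized" trials, and a simple amplitude-window classifier on synthetic data — consider:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samples = 250, 200            # 250 Hz sampling, 0.8 s epochs (synthetic)
t = np.arange(n_samples) / fs

def epoch(has_p300):
    """Synthetic single-trial EEG epoch: unit-variance noise, plus a
    positive deflection around 300 ms when the item is recognized."""
    x = rng.normal(0.0, 1.0, n_samples)
    if has_p300:
        x += 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return x

# 100 "recognized" (target) and 100 "unrecognized" (non-target) trials
trials = [epoch(True) for _ in range(100)] + [epoch(False) for _ in range(100)]
labels = np.array([1] * 100 + [0] * 100)

# Classify by mean amplitude in the 250-450 ms post-stimulus window
win = (t >= 0.25) & (t <= 0.45)
scores = np.array([x[win].mean() for x in trials])
pred = (scores > 0.5).astype(int)
acc = (pred == labels).mean()
print(acc > 0.8)
```

Real concealed-information tests replace the fixed threshold with learned classifiers and work from far noisier multi-channel recordings; the signal-to-noise ratio and window here are illustrative.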
-
Physics-Informed Neural Networks in Inversion Problems

Inversion problems are present across various fields, such as geophysics, medicine, and materials science, where the primary goal is to estimate hidden parameters or reconstruct information from observed data. Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for addressing these problems by embedding physical laws directly into the learning process of the network.

The Challenge of Inversion Problems
In many inversion problems, the objective is to uncover unknown parameters, such as medium properties or anomaly sources, from indirect measurements. Traditionally, these tasks are tackled with complex numerical methods that demand significant computational resources and often struggle with ambiguity and instability, especially when data is limited.
PINNs stand out in inversion problems because they integrate observational data with prior knowledge of the governing physical laws, such as partial differential equations (PDEs). By incorporating these physical constraints during training, PINNs can find solutions that respect physical consistency, making the results more robust even under sparse or noisy data.

How It Works in Practice
In an inversion problem using PINNs, the network is trained to minimize a composite loss function that includes both the observational data misfit and the residual of the PDEs modeling the physical phenomenon. Through this process, the PINN adjusts the system’s unknown parameters so that the network's predictions align with both the available data and the physical laws, enabling reliable inference of the unknowns.

Applications
Geophysics: PINNs are applied to infer subsurface properties, such as density and seismic velocities, from surface data. This approach allows for precise subsurface models without costly conventional inversion methods.
Medicine: In medical imaging, PINNs can be used to reconstruct high-quality images from low-resolution scans, reducing patient radiation exposure while preserving image quality.

Advantages
Robustness with sparse data: PINNs can bypass the need for large datasets by incorporating physical laws directly into the inference process.
Noise and ambiguity reduction: Because PINNs respect the physical model, they tend to produce more consistent results that are less sensitive to noise in the observed data.
Real-time applicability: In some cases, PINNs can perform real-time inference, which is beneficial for monitoring and controlling dynamic physical systems.

The application of PINNs to inversion problems represents a significant advancement, providing a way to solve complex problems more quickly, efficiently, and robustly. These methods are already transforming fields like geophysics and healthcare, and their potential continues to grow as more researchers apply this technology to estimate unknown parameters in physical systems.
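A full PINN uses a neural network and automatic differentiation; the composite-loss structure itself can be illustrated on a deliberately tiny inversion, recovering an unknown decay rate k in u' + k·u = 0 from sparse noisy observations. Here the "network" is just a vector of grid values, and the data-misfit and PDE-residual terms are minimized alternately; all grid sizes, weights, and iteration counts are illustrative choices:

```python
import numpy as np

# Inverse problem: recover k in u' + k*u = 0 from 8 noisy samples of u,
# via the PINN-style composite loss  ||data misfit||^2 + lam*||PDE residual||^2.
rng = np.random.default_rng(0)
n, k_true = 41, 1.5
t = np.linspace(0.0, 2.0, n)
dt = t[1] - t[0]
obs = np.array([0, 5, 10, 15, 20, 25, 30, 40])          # sparse observations
y = np.exp(-k_true * t[obs]) + rng.normal(0, 0.01, len(obs))

M = np.zeros((len(obs), n))
M[np.arange(len(obs)), obs] = 1.0                        # observation operator
lam, k = 1e-3, 0.0
for _ in range(50):
    # PDE residual r_j = (u[j+1]-u[j])/dt + k*(u[j]+u[j+1])/2  as a matrix
    A = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    A[idx, idx] = -1.0 / dt + k / 2.0
    A[idx, idx + 1] = 1.0 / dt + k / 2.0
    # Step 1: given k, minimize the composite loss over u (normal equations)
    u = np.linalg.solve(M.T @ M + lam * (A.T @ A), M.T @ y)
    # Step 2: given u, the PDE-residual-minimizing k has a closed form
    du = (u[1:] - u[:-1]) / dt
    m = (u[1:] + u[:-1]) / 2.0
    k = -(du @ m) / (m @ m)
print(abs(k - k_true) < 0.3)
```

The physics term shapes u between the sparse observations, which is exactly the robustness-to-sparse-data argument made above; a real PINN replaces the grid with a network u_theta(t) and the finite differences with autodiff.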
-
Kolmogorov-Arnold Networks as an alternative to traditional Neural Networks!

Researchers from MIT, Caltech, and Northeastern have introduced a new type of neural network architecture known as Kolmogorov-Arnold Networks (KANs), which presents a significant challenge to the traditional use of Multi-Layer Perceptrons (MLPs).

KANs offer a novel approach to neural network architecture inspired by the Kolmogorov-Arnold representation theorem. This theorem essentially states that any multivariate continuous function can be represented as a composition of univariate functions and the addition operation. Translating this into neural network design, KANs uniquely place adaptable activation functions on the connections or edges between nodes rather than using standard fixed activation functions at the nodes themselves. This flexibility allows KANs to potentially model complex relationships and patterns more effectively, as they can tailor the transformation at each connection to better suit the specific data and task at hand, diverging from traditional networks where the choice of activation function at each layer is static and uniform across the network.

In terms of accuracy, much smaller KANs can achieve comparable or better performance than larger MLPs on tasks such as data fitting and PDE solving. Moreover, KANs demonstrate faster neural scaling laws, meaning their performance improves more rapidly with increased model size compared to MLPs.

KANs also excel in interpretability. They can be intuitively visualized and allow for easy interaction with human users. In case studies from knot theory and physics, KANs served as interactive "collaborators" to help scientists rediscover known mathematical and physical laws, showcasing their potential for scientific discovery. KANs could potentially serve as a foundation model for AI+Science applications and open opportunities to improve today's deep learning models that heavily rely on MLPs.
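The paper parameterizes its edge functions with B-splines; as a rough stand-in, a single KAN-style edge can be sketched as a learnable univariate function built from Gaussian basis functions, "trained" here by linear least squares rather than gradient descent:

```python
import numpy as np

# One KAN-style edge: a learnable univariate function phi(x) expressed as
# a linear combination of basis functions. The paper uses B-splines;
# Gaussian bumps are a simple stand-in for illustration.
rng = np.random.default_rng(0)
centers = np.linspace(-3, 3, 12)

def basis(x):
    # Shape (len(x), len(centers)): one Gaussian bump per basis function
    return np.exp(-((x[:, None] - centers[None, :]) ** 2))

x = rng.uniform(-3, 3, 200)
target = np.sin(x)                      # the shape this edge should learn

# Fitting the edge = solving for the basis coefficients
coef, *_ = np.linalg.lstsq(basis(x), target, rcond=None)

x_test = np.linspace(-2.5, 2.5, 100)
err = np.max(np.abs(basis(x_test) @ coef - np.sin(x_test)))
print(err < 0.05)
```

Because each edge carries its own coefficient vector, the learned shapes can be plotted directly, which is the mechanism behind the interpretability claims: a trained edge that looks like sin(x) or x² can be read off by eye.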
Read the full paper for more details: https://lnkd.in/erEF6HbT :)