Hi there! This week, the spotlight is firmly on AI's deepening roots in biology and medicine: from an EMBL platform that finally puts a century-old cancer theory to the test, to billion-dollar drug discovery labs and agentic AI systems that outperform doctors in diagnosing rare diseases. Meanwhile, the regulatory landscape is heating up with federal agency proposals and a wave of state chatbot bills. Let's dive in.
🔬 EVENT OF THE WEEK
Scientists at the European Molecular Biology Laboratory (EMBL) Heidelberg have developed MAGIC (Machine Learning-Assisted Genomics and Imaging Convergence), an automated platform combining AI-powered microscopy with single-cell genome sequencing to test Theodor Boveri's century-old hypothesis that chromosomal abnormalities drive cancer development.
MAGIC operates like "AI-assisted laser tag," scanning for micronuclei (tiny DNA-containing structures linked to cancer) and tagging them with a photoconvertible dye. In less than a day, the system analyzed nearly 100,000 cells, revealing that over 10% of normal cell divisions spontaneously produce chromosomal errors. When the tumor suppressor gene p53 is mutated, this rate nearly doubles.
Why this matters: For over a century, scientists suspected chromosomal chaos was at the heart of cancer but lacked tools to test this at scale. MAGIC changes that by automating what was previously painstaking manual work, enabling researchers to study cancer's chromosomal origins with unprecedented speed and statistical power.
Key takeaways:
- Over 10% of normal cell divisions produce chromosomal abnormalities spontaneously
- p53 mutations nearly double this error rate, explaining why it's cancer's most commonly mutated gene
- Developed with EMBL-EBI, DKFZ, and published in Nature
⚡ Quick Updates
- NVIDIA & Eli Lilly: Announced a $1 billion AI co-innovation lab in the San Francisco Bay Area, combining Lilly's pharmaceutical expertise with NVIDIA's BioNeMo platform and Vera Rubin architecture for AI-driven drug discovery, robotics, and digital twins. NVIDIA Newsroom
- Illumina: Introduced the Billion Cell Atlas, capturing how 1 billion cells respond to CRISPR perturbations across 200+ disease-relevant cell lines. Created with AstraZeneca, Eli Lilly, and Merck to validate drug targets and train AI models at unprecedented scale. Illumina
- Nature (DeepRare): Agentic AI system DeepRare outperforms experienced physicians in diagnosing rare diseases (64.4% vs 54.6% first-attempt accuracy), integrating 40+ tools with transparent reasoning that specialists agreed with 95.4% of the time. More than 600 institutions have registered to use it. Nature
- OpenAI: Released GPT-5.4 with native computer-use capabilities, 1 million token context window, 33% fewer false claims, and mid-response course correction. First general-purpose model with state-of-the-art computer-use for autonomous agent workflows. OpenAI
- Illumina (TruPath): Launched TruPath Genome for rapid whole-genome rare disease testing with improved accuracy in difficult "dark" genomic regions and a ~10-minute hands-on workflow producing 16 genomes per day. PR Newswire
📄 Top Research Papers
Institution: Multiple institutions | arXiv: March 10, 2026
PathMem introduces a memory-centric multimodal framework for pathology that mirrors how human pathologists think, organizing diagnostic knowledge as long-term memory and using a Memory Transformer to activate relevant criteria during reasoning. Achieves 12.8% improvement in whole-slide image report precision and 9.7% improvement in open-ended diagnosis.
Pathology AI
Institution: Technical University of Munich | arXiv: March 10, 2026
Bypasses traditional image reconstruction entirely, extracting diagnostic cardiac information directly from raw MRI frequency data. Using 42,000 simulated subjects, achieves competitive performance across phenotype regression, disease classification, and anatomical segmentation without ever creating an image.
Cardiac Imaging
Institution: University of Oxford | arXiv: March 10, 2026
Establishes a comprehensive benchmark for evaluating AI on synchronized ECG and PPG biosignals. Across 22,256 visits and 20 clinical tasks, domain-specific biosignal models consistently outperform general time-series models, and multimodal ECG+PPG fusion yields robust improvements.
Wearable Health AI
Institution: Google | arXiv: March 10, 2026
Contrary to human behavior, LLMs become more honest when reasoning through moral trade-offs. Deceptive regions in model representations are "metastable" (easily destabilized), while honest defaults are more robust. Critical implications for clinical AI trustworthiness.
AI Safety
💻 Top GitHub Repos of the Week
⭐ 1,200+ stars | Python/JAX | Active development
Comprehensive suite of genomics foundation models: NTv3 (9 trillion base pairs, 1Mb context, single-base resolution), SegmentNT for genome segmentation, AgroNT for crop genomics, ChatNT for conversational genomic analysis, and single-cell transformers. Published in Nature Methods.
⭐ 12,000+ stars | Python/TypeScript | #1 GitHub Trending
Open-source SuperAgent harness with sub-agents, sandboxed execution, long-term memory, and extensible skills. Ground-up rewrite supporting Slack, Telegram, and Feishu channels. Multi-agent orchestration ideal for complex bioinformatics pipeline automation.
⭐ 20,000+ stars | Python | EMNLP 2025
Simple and fast Retrieval-Augmented Generation with graph-based architecture. Ideal for connecting drug-target relationships, gene-disease associations, and biomedical literature in structured knowledge graphs for AI-assisted clinical research.
⭐ 8,000+ stars | TypeScript/Python | Active development
Open-source infrastructure for running AI-generated code in secure isolated sandboxes. Essential for reproducible computational biology workflows where untested bioinformatics scripts need safe execution before deployment on sensitive genomic data.
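To make the isolation idea concrete, here is a toy sketch in Python: run an untrusted snippet in a separate interpreter process with a clean environment and a timeout. This is only an illustration of the concept; it is not a secure sandbox, and production systems like the one above rely on containers or microVMs rather than bare subprocesses.

```python
# Toy illustration of sandbox-style isolation: execute an untrusted
# Python snippet in a child process with a timeout and an empty
# environment. NOT a real security boundary.
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a Python snippet in a separate process and return its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores user site/env vars
            capture_output=True,
            text=True,
            timeout=timeout,  # kill runaway snippets
            env={},           # start from an empty environment
        )
        return result.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(sum(range(10)))").strip())  # → 45
```

The timeout and empty environment only limit accidents; real sandboxes add filesystem, network, and syscall isolation on top.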
⭐ 25,000+ stars | Python | Microsoft Research
Modular graph-based RAG system for navigating complex biomedical knowledgeâfrom protein interaction networks and drug pathways to clinical trial dataâenabling AI agents to reason over interconnected biological data for drug discovery.
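The core retrieval idea behind graph-based RAG can be sketched in a few lines: store biomedical facts as triples, then pull the multi-hop neighborhood of a query entity to use as LLM context. This is a minimal sketch of the pattern, not the repo's actual API; the gene-disease triples below are invented for the example.

```python
# Minimal sketch of graph-based retrieval for biomedical facts:
# store (entity, relation, entity) triples, then collect facts
# within N hops of a query entity to feed an LLM as context.
from collections import defaultdict

class KnowledgeGraph:
    """Tiny undirected knowledge graph built from triples."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, head: str, relation: str, tail: str) -> None:
        self.edges[head].append((relation, tail))
        self.edges[tail].append((relation, head))

    def retrieve_context(self, entity: str, hops: int = 1) -> list[str]:
        """Collect facts within `hops` of the query entity."""
        seen, frontier, facts = {entity}, [entity], []
        for _ in range(hops):
            next_frontier = []
            for node in frontier:
                for relation, neighbor in self.edges[node]:
                    facts.append(f"{node} --{relation}--> {neighbor}")
                    if neighbor not in seen:
                        seen.add(neighbor)
                        next_frontier.append(neighbor)
            frontier = next_frontier
        return facts

kg = KnowledgeGraph()
kg.add("TP53", "mutated_in", "Li-Fraumeni syndrome")
kg.add("TP53", "regulates", "cell cycle arrest")
kg.add("Li-Fraumeni syndrome", "increases_risk_of", "sarcoma")

# Two-hop retrieval around TP53 surfaces the downstream disease link.
print("\n".join(kg.retrieve_context("TP53", hops=2)))
```

Production systems replace the dictionary with a graph database and rank retrieved facts, but the retrieve-the-neighborhood step is the same.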
📚 Learning Blog of the Week
Author: Phillip Sloan | Publication: Interactive AI CDT Blog, University of Bristol
A firsthand account from the ML4H 2025 conference in San Diego exploring critical challenges in health AI. Covers Matthew McDermott's MEDS ecosystem for standardizing clinical data, Paul Liang's multimodal clinical foundation models, and Gabriel Brat's revealing paradox: radiologists often resist AI recommendations even when models are explainable.
What you'll learn:
- Why reproducibility remains the biggest bottleneck in clinical AI
- How multimodal foundation models reshape medical reasoning
- The human-AI trust gap in radiology and deployment implications
- Why better benchmarks matter as much as better models
🛠️ Top AI Products of the Week
645 upvotes | Category: AI Presentations
AI-powered design partner that turns notes, prompts, or existing decks into beautiful, on-brand slides through conversation. No more "AI slop": it asks questions, builds drafts, and lets you refine through dialogue. Perfect for conference presentations and grant proposals.
624 upvotes | Category: Video Translation
Translates on-screen text inside videos (slides, diagrams, callouts, labels) while preserving original layout, style, and animation. Layered with voice dubbing, lip-sync, and subtitles for fully translated videos. A game-changer for global medical education.
474 upvotes | Category: Code Review
Anthropic's multi-agent code review dispatches AI agents on every pull request to catch bugs that quick reviews miss. Detects security issues, hidden logic flaws, and verifies findings to reduce false positives. Critical for bioinformatics code reliability.
354 upvotes | Category: Open Source TTS
Open-source expressive text-to-speech with natural language voice direction across 80+ languages. Add cues like [whisper] or [laughing nervously] and generate multi-speaker dialogue. Valuable for patient education and multilingual clinical communication.
⚠️ AI Criticism & Concerns
Steyer Proposes Federal AI Safety and Oversight Administration
Tom Steyer called for establishing the "AI Safety and Oversight Administration" (ASOA), a dedicated federal agency to enforce safety standards, conduct algorithm audits, and investigate AI-related harm. The proposal includes mandatory safety testing for critical infrastructure AI, along with transparency requirements for algorithmic decisions in loans, hiring, and criminal justice.
Read more
78 Chatbot Bills Surge Across 27 States
Six weeks into the 2026 legislative session, 78 chatbot bills targeting child safety and mental health are alive in 27 states, a rare bipartisan issue. Bills require AI disclosure, suicidal ideation detection protocols, and prevention of parasocial relationships with minors. Three in four teens use AI companions, fueling urgent legislative action.
Read more
Meta Oversight Board Demands Deepfake Detection Overhaul
The Meta Oversight Board labeled the company's deepfake detection fundamentally inadequate, especially during geopolitical conflicts. Demands include adoption of C2PA provenance standards, real-time AI detection, and dedicated community standards for synthetic content, highlighting the gap between AI generation capabilities and platform safety.
Read more
AI Healthcare Bias: Persistent Discrimination Against Marginalized Groups
Despite awareness campaigns, AI tools continue to systematically misjudge marginalized groups: downgrading women's care needs, offering unequal treatment plans by race, and embedding bias from drug dosing to triage. Researchers warn these biases amplify health disparities unless mandatory audits and diverse training data are enforced before clinical deployment.
Read more
Closing Note
This week reinforces a powerful truth: AI in biology is no longer aspirational; it's operational. From EMBL's MAGIC platform validating century-old cancer theories to DeepRare outperforming doctors in rare disease diagnosis, the tools are becoming clinical-grade. But as the surge of chatbot bills and bias studies remind us, the pace of deployment must be matched by responsible governance.
The billion-dollar investments from NVIDIA, Lilly, and Illumina signal that the infrastructure for AI-powered medicine is being built at scale. The question now is whether our safety frameworks can keep up.
Thank you for reading PythRaSh's AI Newsletter! If you found this valuable, please share it with colleagues in biology and healthcare AI.
Have feedback or suggestions? Reply to this email - I read every response!
See you next week!
Md Rasheduzzaman