⚕️ RadFabric:

Interpretable Agentic AI System with Localized Reasoning for Radiology

Overview

RadFabric Framework

Chest X-ray (CXR) imaging remains a critical diagnostic tool for thoracic conditions, but current automated systems face limitations in pathology coverage, diagnostic accuracy, and integration of visual and textual reasoning.

To address these gaps, we propose RadFabric, a multi-agent, multimodal reasoning framework that unifies visual and textual analysis for comprehensive CXR interpretation. RadFabric is built on the Model Context Protocol (MCP), enabling modularity, interoperability, and scalability for seamless integration of new diagnostic agents.

The system employs specialized CXR agents for pathology detection, an Anatomical Interpretation Agent to map visual findings to precise anatomical structures, and a Reasoning Agent powered by large multimodal reasoning models to synthesize visual, anatomical, and clinical data into transparent and evidence-based diagnoses.

RadFabric achieves significant performance improvements, with near-perfect detection of challenging pathologies like fractures (1.000 accuracy) and superior overall diagnostic accuracy (0.799) compared to traditional systems (0.229–0.527). By integrating cross-modal feature alignment and preference-driven reasoning, RadFabric advances AI-driven radiology toward transparent, anatomically precise, and clinically actionable CXR analysis.

Key Features

Multi-Agent Architecture

Employs specialized, collaborative agents (e.g., pathology detection, anatomical interpretation, reasoning) for distinct tasks.

Multimodal Reasoning

Unifies visual (CXR images) and textual (clinical data, reports) analysis for comprehensive interpretation.

Model Context Protocol (MCP)

Provides the foundation for modularity, interoperability, and scalability, enabling seamless integration of new diagnostic agents.

Specialized CXR Agents

Includes dedicated agents for specific functions like pathology detection.

Anatomical Interpretation Agent

Explicitly maps visual findings to precise anatomical structures, enhancing diagnostic precision.

Reasoning Agent

Uses Large Multimodal Reasoning Models to synthesize visual findings, anatomical mappings, and clinical data.

Evidence-Based & Transparent Diagnoses

Generates diagnoses that are clinically actionable, evidence-based, and transparent.

Usage

1️⃣ Dataset

We use the MIMIC-CXR dataset to evaluate our method. The MIMIC Chest X-ray (MIMIC-CXR) Database v2.0.0 is a large publicly available dataset of chest radiographs in DICOM format with free-text radiology reports. The dataset contains 377,110 images corresponding to 227,835 radiographic studies performed at the Beth Israel Deaconess Medical Center in Boston, MA.

The dataset is de-identified to satisfy the US Health Insurance Portability and Accountability Act of 1996 (HIPAA) Safe Harbor requirements. Protected health information (PHI) has been removed. The dataset is intended to support a wide body of research in medicine including image understanding, natural language processing, and decision support. Source: MIMIC-CXR.

2️⃣ Used Models

We implement several open-source classification models to address medical diagnosis problems in chest X-rays, including:

3️⃣ Inference

This repository provides two versions of the inference method: a w/ MCP version and a w/o MCP version.

w/ MCP Version
w/o MCP Version

This project demonstrates a complete workflow for deploying a pre-trained DenseNet121 model (trained on chest X-ray images) as a Flask HTTP API, exposing it via an MCP server, and interacting through a Python client. Source: w/ MCP on GitHub

Repository Structure

  • flask_torchxray.py: Flask application wrapping the DenseNet121 model. Exposes a /predict POST endpoint that accepts an X-ray image and returns multi-label pathology probabilities in JSON.
  • torch_mcp_server.py: MCP server implementation that registers a predict_via_flask tool, forwarding image inference requests to the Flask API.
  • client.py: Python client demonstrating how to call the MCP server's predict_via_flask tool and display the returned JSON results.
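The client side of the three components above can be sketched in a few lines. This is a minimal stand-in, not the repository's exact code: it assumes the `/predict` endpoint accepts raw image bytes and returns a JSON object mapping pathology names to probabilities.

```python
# Hedged sketch of a client for the Flask /predict endpoint described above.
# The URL and the JSON response shape ({"Pneumonia": 0.83, ...}) are assumptions.
import json
import urllib.request

def predict(image_path, url="http://localhost:5000/predict"):
    """POST an X-ray image to the Flask API and return the pathology probabilities."""
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            url, data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def top_findings(probs, k=3):
    """Sort the multi-label probabilities and keep the k most likely pathologies."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
```

Usage: `top_findings(predict("cxr.dcm"), k=3)` would return the three highest-probability labels from a running server.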

Prerequisites

  • Python: 3.8 or later
  • Hardware: (Optional) CUDA-enabled GPU for accelerated inference
  • Tools:
    • tmux (recommended for long-running processes)
    • MCP packages (mcp-server, mcp-client)

Materials

How to Create & Deploy a Production-Ready MCP Server

Transform your locally running Python-SDK MCP server into a fully packaged, containerized service. Below is a comprehensive workflow for packaging, deploying, and registering your server in the official modelcontextprotocol/servers repository.

📁 1. Organize Your Project Structure

Repository structure:

my-mcp-server/
├── server.py            # FastMCP-based entrypoint
├── requirements.txt     # Python dependencies
├── README.md            # Usage, CLI examples, Env vars
└── tests/               # Unit tests (pytest)

Documentation requirements:

  • Show installation steps: pip install .
  • Include run commands: mcp-server --http :8080
  • Provide example tool definitions in server.py
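A hedged sketch of such a tool definition in server.py, assuming the official `mcp` Python SDK (FastMCP). The tool name `predict_via_flask` mirrors the repository description; the Flask URL and the `format_findings` helper are illustrative assumptions, and the SDK import is deferred so the pure helper works without it installed.

```python
# Sketch of server.py: expose the Flask model behind an MCP tool.
FLASK_URL = "http://localhost:5000/predict"  # assumed endpoint

def format_findings(probs, threshold=0.5):
    """Keep only pathologies whose probability meets the threshold."""
    return {name: p for name, p in probs.items() if p >= threshold}

def build_server():
    # Imported lazily so the helper above stays testable without the SDK.
    from mcp.server.fastmcp import FastMCP
    import json
    import urllib.request

    mcp = FastMCP("torchxray")

    @mcp.tool()
    def predict_via_flask(image_path: str) -> dict:
        """Forward an X-ray image to the Flask API and return its JSON result."""
        with open(image_path, "rb") as f:
            req = urllib.request.Request(FLASK_URL, data=f.read())
        with urllib.request.urlopen(req) as resp:
            return format_findings(json.loads(resp.read()))

    return mcp

if __name__ == "__main__":
    build_server().run()  # stdio transport by default
```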
🐳 2. Containerize with Docker

Example Dockerfile:

FROM python:3.10-slim AS build
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

FROM python:3.10-slim
WORKDIR /app
COPY --from=build /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY . .
EXPOSE 8080
ENTRYPOINT ["python", "server.py"]

Build and push:

docker build -t yourorg/my-mcp-server:1.0.0 .
docker push yourorg/my-mcp-server:1.0.0
📦 3. Publish as a Python Package

Setup configuration:

In setup.py or pyproject.toml, define console scripts:

[project]
name = "my-mcp-server"
version = "1.0.0"

[project.scripts]
mcp-server = "server:main"

Publish to PyPI:

pip install build twine
python -m build
twine upload dist/*

Install & run:

pip install my-mcp-server
mcp-server --http :8080
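The `[project.scripts]` entry assumes server.py exposes a `main()` callable. A minimal sketch of such an entry point follows; the `--http host:port` flag syntax is an assumption matching the run command shown above, not a documented SDK interface.

```python
# Hedged sketch of the `server:main` entry point wired up in [project.scripts].
import argparse

def parse_http(addr):
    """Split 'host:port'; an empty host means listen on all interfaces."""
    host, _, port = addr.rpartition(":")
    return host or "0.0.0.0", int(port)

def main(argv=None):
    parser = argparse.ArgumentParser(prog="mcp-server")
    parser.add_argument("--http", default=":8080",
                        help="listen address as host:port")
    args = parser.parse_args(argv)
    host, port = parse_http(args.http)
    # Here the real server would start, e.g. a FastMCP HTTP transport.
    return host, port
```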
🌐 4. Register in Official MCP Servers Repository

Following the CONTRIBUTING.md guidelines:

  1. Fork & clone the modelcontextprotocol/servers repository
  2. Create a branch and edit README.md (under the "Reference servers" section)
  3. Add your entry:
    - **[My MCP Server](https://github.com/yourorg/my-mcp-server)** – Brief description of functionality.
  4. Submit a PR following the project's template

🎯 Result: By following these steps—project organization, Dockerization, PyPI packaging, and official registration—you transform a local FastMCP process into a polished, shareable MCP Server that anyone can install and run with a single command.

This project demonstrates how to combine large reasoning models (LRMs) with CV-based models to produce reliable diagnoses from chest X-rays. Source: w/o MCP on GitHub

Repository Structure

  • Model_input: classification results and location information from the upstream CV models, used as input to the reasoning step.
  • inference.py: uses LRMs to infer lesion probabilities from the model input.
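The reasoning step can be sketched as a prompt-construction helper: fold the CV models' classification results and locations into text for the LRM. The input tuple format and prompt wording below are assumptions, not the repository's exact code; the actual LRM API call is platform-specific and omitted.

```python
# Hedged sketch of the w/o-MCP inference step: turn Model_input
# (pathology, probability, location) records into an LRM prompt.
def build_prompt(findings):
    """findings: iterable of (pathology, probability, location) tuples."""
    lines = [f"- {name}: p={p:.2f}, location={loc}" for name, p, loc in findings]
    return ("You are a radiology assistant. Given these CXR model outputs,\n"
            "estimate the likelihood of each lesion and justify it anatomically:\n"
            + "\n".join(lines))
```

The returned string would then be sent to the chosen platform's chat-completion API together with the API key mentioned below.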

Prerequisites

  • Python: 3.11 or later, with dependencies installed via pip install -r requirements.txt
  • API key: supply the API key of your preferred LLM platform.

Citing

If you use RadFabric in your research, please cite our work:

@article{radfabric2024, ... }