ruvnet / RuView

RuView Beta Software — Under active development. APIs and firmware may change. Known limitations: ESP32-C3 and original ESP32 are not supported (single-core, insufficient for CSI DSP). Single ESP32 deployments have limited spatial resolution — use 2+ nodes or add a Cognitum Seed for best results. Camera-free pose accuracy is limited (PCK@20 ≈ 2.5% with proxy labels) — camera ground-truth training targets 35%+ PCK@20; the pipeline is implemented, but the data-collection and evaluation phases (ADR-079 P7–P9) are still pending, so no measured camera-supervised PCK@20 has been published yet. Contributions and bug reports are welcome at Issues.


See through walls with WiFi

Turn ordinary WiFi into a spatial intelligence and sensing system. Detect people, measure breathing and heart rate, track movement, and monitor rooms — through walls, in the dark, with no cameras or wearables. Just physics.


RuView is a WiFi sensing platform that turns radio signals into spatial intelligence. Every WiFi router already fills your space with radio waves. When people move, breathe, or even sit still, they disturb those waves in measurable ways. RuView captures these disturbances using Channel State Information (CSI) from low-cost ESP32 sensors and turns them into actionable data: who’s there, what they’re doing, and whether they’re okay.
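A minimal illustration of the core idea — motion modulating the multipath channel. The `detect_presence` helper, the window size, and the threshold below are hypothetical and tuned only to the synthetic traces shown; this is not RuView's trained detector.

```python
import math
import random
from statistics import pvariance

def presence_score(amplitudes, window=50):
    """Sliding-window variance of one subcarrier's CSI amplitude.
    An empty room gives near-constant amplitude (low variance);
    a person moving or breathing modulates the multipath paths
    and raises it."""
    return [pvariance(amplitudes[i - window:i])
            for i in range(window, len(amplitudes) + 1)]

def detect_presence(amplitudes, window=50, threshold=0.01):
    """Flag presence if any window's variance exceeds the threshold."""
    return any(s > threshold for s in presence_score(amplitudes, window))

# Synthetic traces: sensor noise only vs. a slow body-motion ripple.
random.seed(7)
still = [1.0 + random.gauss(0, 0.005) for _ in range(200)]
moving = [1.0 + 0.4 * math.sin(0.2 * t) + random.gauss(0, 0.005)
          for t in range(200)]

print(detect_presence(still))   # noise-floor variance stays below threshold
print(detect_presence(moving))  # the oscillation drives variance well above it
```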


What it senses:

  • Presence and occupancy — detect people through walls, count them, track entries and exits
  • Vital signs — breathing rate and heart rate, contactless, while sleeping or sitting
  • Activity recognition — walking, sitting, gestures, falls — from temporal CSI patterns
  • Environment mapping — RF fingerprinting identifies rooms, detects moved furniture, spots new objects
  • Sleep quality — overnight monitoring with sleep stage classification and apnea screening


Built on RuVector and Cognitum Seed, RuView runs entirely on edge hardware — an ESP32 mesh (as low as $9 per node) paired with a Cognitum Seed for persistent memory, cryptographic attestation, and AI integration. No cloud, no cameras, no internet required. The system learns each environment locally using spiking neural networks that adapt in under 30 seconds, with multi-frequency mesh scanning across 6 WiFi channels that uses your neighbors’ routers as free radar illuminators. Every measurement is cryptographically attested via an Ed25519 witness chain. RuView also supports pose estimation (17 COCO keypoints via the WiFlow architecture), trained entirely without cameras using 10 sensor signals — a technique building on the original DensePose From WiFi research from Carnegie Mellon University.
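To make the attestation idea concrete, here is a sketch of the hash-linking half of a witness chain: each entry commits to the digest of the previous one, so editing any past measurement breaks verification. The Ed25519 signature RuView applies per entry is elided here, and the function names and entry layout are illustrative, not RuView's actual format.

```python
import hashlib
import json

GENESIS = "0" * 64  # digest used as the "previous" link of the first entry

def append_witness(chain, measurement):
    """Append a measurement to a hash-linked witness log.
    (RuView additionally signs each entry with an Ed25519 key;
    signing is omitted in this sketch.)"""
    prev = chain[-1]["digest"] if chain else GENESIS
    body = {"prev": prev, "data": measurement}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "digest": digest})
    return chain

def verify_chain(chain):
    """Recompute every digest and check each entry links to its predecessor."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev:
            return False
        body = {"prev": entry["prev"], "data": entry["data"]}
        if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest() \
                != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

chain = []
append_witness(chain, {"ts": 1, "bpm": 14})
append_witness(chain, {"ts": 2, "bpm": 15})
print(verify_chain(chain))        # intact chain verifies
chain[0]["data"]["bpm"] = 99
print(verify_chain(chain))        # tampering with any entry breaks it
```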


Built for low-power edge applications

Edge modules are small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response.


| What | How | Spec |
|---|---|---|
| 🦴 Pose estimation | CSI subcarrier amplitude/phase → 17 COCO keypoints | 171K emb/s (M4 Pro) |
| 🫁 Breathing detection | Bandpass 0.1-0.5 Hz → zero-crossing BPM | 6-30 BPM |
| 💓 Heart rate | Bandpass 0.8-2.0 Hz → zero-crossing BPM | 40-120 BPM |
| 👤 Presence sensing | Trained model + PIR fusion — 100% accuracy | 0.012 ms latency |
| 🧱 Through-wall | Fresnel zone geometry + multipath modeling | Up to 5 m depth |
| 🧠 Edge intelligence | 8-dim feature vectors + RVF store on Cognitum Seed | $140 total BOM |
| 🎯 Camera-free training | 10 sensor signals, no labels needed | 84 s on M4 Pro |
| 📷 Camera-supervised training | MediaPipe + ESP32 CSI → 35%+ PCK@20 target (ADR-079; eval phases pending) | ~19 min on laptop (pipeline) |
| 📡 Multi-frequency mesh | Channel hopping across 6 bands, neighbor APs as illuminators | 3x sensing bandwidth |
| 🌐 3D point cloud (optional fusion) | Camera depth (MiDaS) + WiFi CSI + mmWave radar → unified spatial model | 22 ms pipeline · 19K+ points/frame |
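The breathing and heart-rate entries describe a bandpass-then-zero-crossing recipe. A minimal sketch of that recipe for breathing, under stated simplifications: subtracting a 1 s moving average stands in for the 0.1-0.5 Hz bandpass (a real implementation would use a proper IIR design such as a Butterworth filter), and zero crossings of the residual count two per breath cycle.

```python
import math

def moving_average(x, w):
    """Boxcar smoother used here as a crude low-frequency trend estimate."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def breathing_bpm(samples, fs):
    """Estimate breaths per minute from a CSI amplitude stream.
    Detrend with a 1 s moving average, then count zero crossings
    of the residual: a sinusoid crosses zero twice per cycle."""
    w = int(fs)
    trend = moving_average(samples, w)
    # Align samples with the (delayed) trend before subtracting.
    resid = [s - t for s, t in zip(samples[w // 2:], trend)]
    crossings = sum(1 for a, b in zip(resid, resid[1:]) if a * b < 0)
    return (crossings / 2) / (len(resid) / fs) * 60

# Synthetic 0.25 Hz (15 breaths/min) chest-motion ripple sampled at 20 Hz.
fs = 20
samples = [1.0 + 0.05 * math.sin(2 * math.pi * 0.25 * t / fs)
           for t in range(60 * fs)]
print(round(breathing_bpm(samples, fs)))  # ≈ 15 breaths/min
```

The same structure with a 0.8-2.0 Hz band gives the heart-rate path.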

Getting Started

Option 1: Docker (simulated data, no hardware needed)

docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest
# Open http://localhost:3000


Option 2: Live sensing with ESP32-S3 hardware ($9)

# Flash firmware, provision WiFi, and start sensing:
python -m esptool --chip esp32s3 --port COM9 --baud 460800 \
  write_flash 0x0 bootloader.bin 0x8000 partition-table.bin \
  0xf000 ota_data_initial.bin 0x20000 esp32-csi-node.bin
python firmware/esp32-csi-node/provision.py --port COM9 \
  --ssid "YourWiFi" --password "secret" --target-ip 192.168.1.20


Option 3: Full system with Cognitum Seed ($140)

# ESP32 streams CSI → bridge forwards to Seed for persistent storage + kNN + witness chain
node scripts/rf-scan.js --port 5006 # Live RF room scan
node scripts/snn-csi-processor.js --port 5006 # SNN real-time learning
node scripts/mincut-person-counter.js --port 5006 # Correct person counting
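The scripts above consume a CSI stream the ESP32 pushes over the network. A stand-in listener for that hop can be sketched as below; the port number comes from the scripts, but the wire format (one UDP datagram per frame, CSV of subcarrier amplitudes) and the `serve_csi` API are assumptions, not the actual firmware protocol.

```python
import socket

def serve_csi(port=5006, max_packets=None, handler=print):
    """Listen for UDP datagrams from a CSI node and hand each
    decoded frame to `handler(addr, amplitudes)`.
    ASSUMPTION: payloads are comma-separated floats; the real
    ESP32 firmware may use a binary format instead."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    seen = 0
    try:
        while max_packets is None or seen < max_packets:
            data, addr = sock.recvfrom(4096)
            amplitudes = [float(v) for v in data.decode().strip().split(",")]
            handler(addr, amplitudes)
            seen += 1
    finally:
        sock.close()
```

A real bridge would forward each frame to the Seed for storage and kNN lookup rather than just invoking a callback.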


Note: CSI-capable hardware recommended. Presence, vital signs, through-wall sensing, and all advanced capabilities require Channel State Information (CSI) from an ESP32-S3 ($9) or research NIC. The Docker image runs with simulated data for evaluation. Consumer WiFi laptops provide RSSI-only presence detection.


| Option | Hardware | Cost | Full CSI Capabilities |
|---|---|---|---|
| ESP32 + Cognitum Seed (recommended) | ESP32-S3 + Cognitum Seed | ~$140 | Yes (pose, breathing, heartbeat, motion, presence + persistent vector store, kNN search, witness chain, MCP proxy) |
| ESP32 Mesh | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes (pose, breathing, heartbeat, motion, presence) |
| Research NIC | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes (full CSI with 3x3 MIMO) |
| Any WiFi | Windows, macOS, or Linux laptop | $0 | No (RSSI-only: coarse presence and motion) |

No hardware? Verify the signal processing pipeline with the deterministic reference signal: python archive/v1/data/proof/verify.py


Real-time pose skeleton from WiFi CSI signals — no cameras, no wearables ▶ Live Observatory Demo | ▶ Dual-Modal Pose Fusion Demo | ▶ Live 3D Point Cloud


The server is optional for visualization and aggregation — the ESP32 runs independently for presence detection, vital signs, and fall alerts. Live ESP32 pipeline: connect an ESP32-S3 node → run the sensing server → open the pose fusion demo for real-time dual-modal sensing.
