IMéRA – Institute for Advanced Study

Institut Cancer et Immunologie – Aix-Marseille Université



EEG: SLEEP & DREAM

Interactive immersive tool & installation for audiovisual translation of EEG data

[Examples to be added soon…]

In collaboration with Peter Simor (Université Libre de Bruxelles & Budapest Laboratory of Sleep and Cognition)

This project investigates the audio, visual, and haptic translation of polysomnographic data—including EEG and auxiliary physiology—as media for interactive exploration and neurophenomenological insight. Recordings spanning the full sleep cycle—wakefulness, drowsy onset, NREM stages 1–3 (light sleep, spindles, slow-wave sleep), and REM—are streamed from CSV at the original sampling rate (512 Hz) and routed into different translation engines for listening, comparison, and embodied perception.

In one mode, the system becomes an immersive multichannel “EEG choir”: each sensor is rendered as an individual voice whose continuous motion can be heard, compared, and physically felt in real time. In another, the same streams are downsampled within the sonification tools and mapped into MIDI-driven instrumental textures (e.g., strings/winds for EEG, and percussion/synth gestures for auxiliary physiology), prioritizing musical articulation and performable structure. Visitors and researchers can adjust each channel’s level and timbral emphasis (and mute/solo channels), enabling dynamic comparisons between spatial brain regions, physiological signals, and sleep stages.
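
A minimal sketch of the downsampled, MIDI-oriented mapping described above, in Python (illustrative only: the block rate, note range, and scaling are assumptions rather than the installation's actual tuning):

    import numpy as np

    def eeg_to_notes(eeg, fs=512, notes_per_s=4, lo_note=48, hi_note=84):
        """Reduce one raw channel to a slow stream of MIDI note numbers."""
        x = np.asarray(eeg, float)
        block = fs // notes_per_s                          # samples per note event
        means = x[: (len(x) // block) * block].reshape(-1, block).mean(axis=1)
        span = means.max() - means.min() + 1e-12
        # rescale the block means onto the chosen note range
        return (lo_note + (means - means.min()) / span * (hi_note - lo_note)).astype(int)

Each resulting note stream can then be voiced by its own instrument, as in the strings/winds versus percussion/synth split described above.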


How the system works (signal path)

Two layers run in parallel: (1) continuous raw streaming for immediate “voice” motion, and (2) a slower feature layer that shapes larger-scale sonic / spatial behavior.

Data source → streamer → OSC → audio / spatial engine → multichannel output
Polysomnographic recordings (EEG + EOG/ECG/EMG) are streamed from CSV at the original sampling rate (512 Hz) and routed as individual control streams. Each stream directly modulates a corresponding tone (or parameter cluster), producing a temporally precise multichannel texture.
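
A minimal sketch of the streamer stage, assuming the python-osc library; the file name, column names, and OSC addresses are hypothetical placeholders:

    import csv, time
    from pythonosc.udp_client import SimpleUDPClient       # pip install python-osc

    FS = 512.0                                             # original sampling rate (Hz)
    CHANNELS = ("Fz", "Cz", "Oz", "EOG_L", "ECG", "EMG")   # hypothetical column names
    client = SimpleUDPClient("127.0.0.1", 9000)            # assumed engine address

    with open("sleep_recording.csv") as f:                 # hypothetical file name
        t0 = time.monotonic()
        for i, row in enumerate(csv.DictReader(f)):
            for ch in CHANNELS:
                # one OSC message per channel, per sample: /eeg/raw/<channel> <value>
                client.send_message(f"/eeg/raw/{ch}", float(row[ch]))
            delay = t0 + (i + 1) / FS - time.monotonic()   # pace to the 512 Hz clock
            if delay > 0:
                time.sleep(delay)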

  • Raw layer: “each sensor as a voice” (continuous, sample-accurate feel).
  • Feature layer: band-power contours and event-like triggers (slower updates) that steer timbre, diffusion, and stability.

What visitors can do (interaction)

The interface is designed for exploratory listening—like moving a microscope around an inner landscape.

  • Solo / mute / rebalance channels to compare regions (e.g., frontal vs occipital; left vs right).
  • Contrast brain vs physiology by emphasizing EEG or isolating EOG / ECG / EMG.
  • Stage comparison (wake → N1 → N2 → N3 → REM) by switching excerpts or navigating within a recording.
  • Focus listening modes (e.g., “micro-detail” vs “sleep weather”) by blending raw vs feature layers.
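
A minimal sketch of the mixer state behind these controls (solo / mute / gain), again assuming python-osc; the OSC address scheme and channel names are invented for illustration:

    from pythonosc.dispatcher import Dispatcher            # pip install python-osc
    from pythonosc.osc_server import BlockingOSCUDPServer

    gains = {"Fz": 1.0, "Oz": 1.0, "EOG_L": 1.0}           # hypothetical channel set
    muted, soloed = set(), set()

    def effective_gain(ch):
        """Gain actually applied to a channel after solo/mute logic."""
        if soloed and ch not in soloed:
            return 0.0
        return 0.0 if ch in muted else gains.get(ch, 0.0)

    def on_gain(addr, ch, value):                          # e.g. /mix/gain Oz 0.5
        gains[ch] = float(value)

    def on_mute(addr, ch, state):                          # e.g. /mix/mute EOG_L 1
        (muted.add if state else muted.discard)(ch)

    def on_solo(addr, ch, state):                          # e.g. /mix/solo Fz 1
        (soloed.add if state else soloed.discard)(ch)

    dispatcher = Dispatcher()
    dispatcher.map("/mix/gain", on_gain)
    dispatcher.map("/mix/mute", on_mute)
    dispatcher.map("/mix/solo", on_solo)
    BlockingOSCUDPServer(("127.0.0.1", 9001), dispatcher).serve_forever()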

Feature layer: “sleep weather” (macro personalities)

To avoid arbitrary one-to-one mappings, the system groups derived features into a small set of perceptual “macro controls” that behave like weather patterns: density, agitation, brightness, stability, depth, and spatial diffusion.

Features are computed on short windows (seconds) with overlapping hops, then streamed via OSC at a slower rate than the raw layer.

This provides coherent, interpretable motion while preserving the raw microstructure in parallel.
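
A minimal sketch of this feature layer, assuming numpy and scipy; the 4 s window and 1 s hop are example values, not the installation's settings:

    import numpy as np
    from scipy.signal import welch

    FS = 512
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
             "sigma": (12, 16), "beta": (16, 30)}          # conventional band edges (Hz)

    def band_powers(x, fs=FS):
        """Mean spectral power per band for one analysis window."""
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
        return {name: float(pxx[(f >= lo) & (f < hi)].mean())
                for name, (lo, hi) in BANDS.items()}

    def feature_stream(signal, fs=FS, win_s=4.0, hop_s=1.0):
        """4 s windows, 1 s hop: a ~1 Hz feature stream, far slower than the raw layer."""
        win, hop = int(win_s * fs), int(hop_s * fs)
        for start in range(0, len(signal) - win + 1, hop):
            yield band_powers(np.asarray(signal[start:start + win]), fs)

Each yielded dictionary can then go out as one slow OSC message per band, alongside the untouched raw stream.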

Band / feature                      Perceptual role (example mapping)
Delta (slow-wave, ~0.5–4 Hz)        “DARK / HEAVY” — thickness, gravity, slow spatial breathing
Theta (~4–8 Hz)                     “DREAM” — depth, drift, internal motion
Alpha (~8–12 Hz)                    “STILL” — stability, reduced jitter, calmer edge
Beta (~16–30 Hz)                    “EDGE” — forwardness / arousal, brightness, tension
Sigma (spindle range, ~12–16 Hz)    Spindle “shimmers” — transient brightening / fluttering texture

(Mapping is intentionally adjustable: the same features can drive sound, light, spatialization, or haptics, depending on the installation context.)
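
For example, one adjustable mapping might look like this (the weights are illustrative placeholders, not the installation's tuning):

    import numpy as np

    # Adjustable weights from band powers to the six macro "weather" controls.
    MACRO_WEIGHTS = {
        "density":    {"delta": 0.7, "theta": 0.3},
        "agitation":  {"beta": 0.8, "sigma": 0.2},
        "brightness": {"beta": 0.6, "alpha": -0.4},
        "stability":  {"alpha": 0.8, "beta": -0.5},
        "depth":      {"theta": 0.6, "delta": 0.4},
        "diffusion":  {"delta": 0.5, "theta": 0.3},
    }

    def macro_controls(powers):
        """Weighted sums of log band powers, squashed into 0..1 control values."""
        logs = {band: np.log10(p + 1e-12) for band, p in powers.items()}
        return {macro: float(1.0 / (1.0 + np.exp(-sum(w * logs[b] for b, w in ws.items()))))
                for macro, ws in MACRO_WEIGHTS.items()}

Because the weights live in one table, the same features can be re-routed to sound, light, spatialization, or haptics by swapping only the output stage.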

Physiology layer (EOG / ECG / EMG)

Auxiliary physiology is treated as an equal musical / physical partner rather than “background data.”

  • ECG: can drive beat-triggered events (true heartbeat timing rather than an imposed tempo).
  • Inter-beat interval (IBI): can steer spatial diffusion / “breath” or other slow parameters for stable, legible motion.
  • EOG: eye movements can be rendered as lateral sweeps or gestural motion (especially resonant in REM).
  • EMG: muscle tone can contribute to noise components, texture, or haptic intensity.

Where needed, simple cleanup can be applied for reliability (e.g., peak-debounce for ECG triggers), while keeping the raw streams available for inspection.
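
A minimal sketch of such a debounced R-peak trigger; the threshold and refractory window are example values, and a real deployment would likely use a more robust detector:

    import numpy as np

    def r_peak_triggers(ecg, fs=512, thresh_sd=2.5, refractory_s=0.25):
        """Threshold-crossing R-peak detector with a refractory 'debounce' window."""
        x = np.asarray(ecg, float) - np.mean(ecg)
        thresh = thresh_sd * np.std(x)
        refractory = int(refractory_s * fs)
        peaks, last = [], -refractory
        for i in range(1, len(x) - 1):
            # local maximum above threshold, outside the refractory window
            if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1] and i - last > refractory:
                peaks.append(i)
                last = i
        ibi = np.diff(peaks) / fs          # inter-beat intervals (s) for the slow parameters
        return np.asarray(peaks), ibi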

Listening guide

This is not a classifier—think of it as a set of listening cues. Sleep is variable; the goal is to make tendencies perceptible.

  • Wake: often more alpha/beta presence; faster “edge” motion.
  • N1 (drowsy onset): alpha tends to soften; theta often emerges as drifting depth.
  • N2: spindles can appear as brief shimmering bursts; K-complexes can read as sudden large-scale gestures.
  • N3 / SWS: delta dominance—slow, heavy waves; strong synchronizations can feel like collective breathing.
  • REM: mixed-frequency textures; EOG gestures may become more prominent.

Sonification approach (why “each sensor as a voice”)

The sonification strategy has evolved over the course of the project.

The initial conception used more downsampled, melodic mappings: musically coherent, but less faithful to fast temporal structure.

A later conception introduces continuous frequency modulation: each channel directly shapes the motion of its own tone in real time. The result is a richly textured, temporally precise choir in which rhythms and synchronizations become audibly present. One aim of this conception is interpretability through embodiment: preserving micro-timing while offering macro “weather” cues that help listeners orient themselves.
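
A minimal sketch of the continuous frequency-modulation idea: each channel's samples steer the instantaneous frequency of its own oscillator, with phase accumulation keeping the modulation click-free (base pitch and modulation depth are illustrative):

    import numpy as np

    def fm_voice(eeg, fs=512, audio_fs=48000, base_hz=220.0, depth_hz=80.0):
        """Render one channel as a tone whose frequency follows the signal in real time."""
        x = np.asarray(eeg, float)
        x = x / (np.max(np.abs(x)) + 1e-12)                # normalize to roughly -1..1
        n = int(len(x) * audio_fs / fs)
        ctrl = np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)
        freq = base_hz + depth_hz * ctrl                   # instantaneous frequency (Hz)
        phase = 2 * np.pi * np.cumsum(freq) / audio_fs     # phase accumulation: no clicks
        return np.sin(phase).astype(np.float32)

    # A "choir": one voice per channel, each at its own base pitch, summed or spatialized.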

Channel map & signals (legend)

Signal group    Examples
EEG (scalp)     Frontal (F*), Central (C*), Temporal (T*), Parietal (P*), Occipital (O*)
EOG             Left / right eye channels
ECG             Heart electrical activity (optionally: R-peak triggers + IBI)
EMG             Muscle tone / activity


VIBROACOUSTIC DILATANT DATA

EEG data from two (of 20+) channels are mapped and scaled into synthesized acoustic frequencies, which drive a subwoofer whose vibrations set a dilatant (shear-thickening, non-Newtonian) fluid in motion, creating a visual and haptic embodiment of neural rhythms during slow-wave sleep (SWS).
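
A minimal sketch of this scaling step, reusing the phase-accumulation approach above to render one channel as a low-frequency tone for the subwoofer (the 25–60 Hz range and file output are assumptions):

    import numpy as np
    from scipy.io import wavfile

    def dilatant_drive(eeg, fs=512, audio_fs=48000, lo_hz=25.0, hi_hz=60.0):
        """Map one slow-wave channel onto a low-frequency tone for the subwoofer."""
        x = np.asarray(eeg, float)
        x = x / (np.max(np.abs(x)) + 1e-12)
        n = int(len(x) * audio_fs / fs)
        ctrl = np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)
        freq = lo_hz + (hi_hz - lo_hz) * (ctrl + 1) / 2    # -1..1 -> lo..hi Hz
        tone = np.sin(2 * np.pi * np.cumsum(freq) / audio_fs)
        wavfile.write("dilatant_drive.wav", audio_fs, (0.9 * tone).astype(np.float32))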

SWS is a sleep stage during which significant clearance of reactive oxygen species (ROS) occurs (essential for cellular repair and cancer prevention). The result is a visual/haptic metaphor for the resting body as an active site of repair…

The project includes interactive participatory public events (@IMéRA) exploring making and activating dilatant-covered subwoofers with audified data…

Interactive EEG sonification-cymatification-haptification…

Biofeedback EEG explorations:
https://dani.oore.ca/jade/

Late nights with Peter Simor editing & sonifying EEG sleep data in my IMéRA studio…