Real-Time Artifact Removal for Human-Robot Interaction: Advanced Signal Processing for Robust Brain-Centered Applications

Michael Long · Dec 02, 2025

Abstract

This article provides a comprehensive analysis of real-time artifact removal techniques critical for reliable Human-Robot Interaction (HRI) systems. Targeting researchers and development professionals, we explore the foundational challenges of physiological and motion artifacts in EEG and other biosignals, review state-of-the-art methodological approaches including ICA, ASR, iCanClean, and adaptive filtering, and offer practical troubleshooting strategies for optimizing performance in ecological settings. Through comparative validation of techniques and discussion of emerging trends like deep learning and auxiliary sensors, this work synthesizes key principles for developing robust, real-time artifact removal pipelines that ensure accurate emotion recognition, intent decoding, and seamless brain-robot communication.

The Critical Challenge: Understanding Artifacts in Real-World HRI Systems

In human-robot interaction (HRI) research, the accurate interpretation of biological signals is paramount for developing responsive and adaptive systems. Electroencephalography (EEG) and other biosignals provide a non-invasive window into a user's cognitive and emotional state, enabling robots to respond to human intention and affect. However, these signals are invariably contaminated by artifacts—unwanted signals from non-neural sources—which can severely degrade system performance. Effective artifact removal is particularly critical in real-time HRI applications, where the fidelity of the processed signal directly impacts the robot's ability to make appropriate and timely decisions. This note defines the major categories of artifacts, summarizes removal performance data, and provides detailed experimental protocols for their mitigation within HRI research contexts.

Artifact Classification and Characteristics

Artifacts in biosignal recordings are broadly classified into three categories based on their origin: physiological, motion-related, and environmental. The table below delineates their sources, characteristics, and impact on signals.

Table 1: Classification and Characteristics of Common Artifacts

Artifact Category | Specific Type | Origin | Frequency Band | Key Characteristics | Impact on HRI
Physiological | Ocular (EOG) | Eye blinks and movements [1] | Slow frequencies, below 5 Hz [2] | High-amplitude, slow waves [1] | Obscures prefrontal cortex signals critical for emotion estimation [2] [3]
Physiological | Muscle (EMG) | Muscle contractions (face, neck, jaw) [1] | Broad spectrum (20–300 Hz) [1] [2] | High-frequency, transient spikes [1] | Masks neural activity in beta/gamma bands, crucial for intention estimation [2] [4]
Physiological | Cardiac (ECG/PPG) | Heartbeat and pulse [1] | ~1.2 Hz (pulse) [1] | Regular, periodic waveform [1] | Can be mistaken for rhythmic brain activity; introduces periodic noise [1]
Motion | Head Movement | Cable sway, electrode displacement [5] [6] | Overlaps with neural bands [5] | Baseline shifts, amplitude bursts [5] | Causes significant signal distortion during mobile HRI tasks [5] [6]
Motion | Gait-Related | Heel strike during walking [5] | Low frequency | Arrhythmic amplitude bursts [5] | Corrupts signals in mobile brain-imaging scenarios [5]
Environmental | Power Line Noise | Electrical mains [1] [2] | 50/60 Hz and harmonics [2] | Stationary, narrowband interference [1] | Obscures neural signals in the gamma band [2]
Environmental | Technical | Faulty electrodes, cable movement [1] | Varies | Sudden signal drops or spikes [1] | Creates non-physiological signal patterns, leading to misinterpretation [1]

Quantitative Performance of Artifact Removal Methods

Selecting an appropriate artifact removal method requires an understanding of its performance under specific conditions. The following table synthesizes quantitative results from recent studies for easy comparison.

Table 2: Performance Comparison of Selected Artifact Removal Methods

Method Name | Algorithm Type | Artifacts Targeted | Key Performance Metrics | Reported Performance | Suitability for Real-Time HRI
iCanClean [6] | Real-time capable filtering | Motion, muscle, eye, line noise | Data Quality Score (0–100%) [6] | Improved from 15.7% to 55.9% (all artifacts) [6] | High (validated on mobile data) [6]
Motion-Net [5] | Subject-specific CNN | Motion | Artifact reduction (η), SNR improvement [5] | η = 86% ± 4.13; SNR improvement 20 ± 4.47 dB [5] | Medium (subject-specific training required) [5]
Mutual Information (Epanechnikov) [7] | Blind Source Separation (BSS) | General (tested for emotion recognition) | Classification accuracy [7] | 80.13% accuracy [7] | Medium (computational cost depends on implementation) [7]
Adaptive Filtering [4] | Modified LMS adaptive filter | TENS feedback artifact in sEMG | Signal-to-Noise Ratio (SNR) improvement [4] | SNR increase of 10.3 dB [4] | High (designed for real-time prosthetic control) [4]
1D-CNN with Penalty [8] | Convolutional neural network | Motion in fNIRS | Signal-to-Noise Ratio (SNR) improvement [8] | SNR improvement > 11.08 dB [8] | High (processing time: 0.53 ms/sample) [8]

Detailed Experimental Protocols

Protocol 1: Real-Time Ocular and Motion Artifact Removal for Affective HRI

This protocol is adapted from methodologies optimized for real-time emotion estimation in HRI, using a minimal set of electrodes for wearability [2] [3].

1. Objective: To acquire clean EEG signals for emotion estimation in real-time by removing ocular and motion artifacts.

2. Materials and Setup:

  • EEG System: A mobile EEG system with active electrodes to reduce cable motion artifacts [6].
  • Electrode Montage: Focus on eight key electrodes for emotion recognition: AF3, T7, TP7, P7, AF4, T8, TP8, P8 [2] [3].
  • Software: A processing pipeline capable of implementing real-time filters and the iCanClean algorithm [6].

3. Procedure:

  • Step 1: Basic Filtering. Apply a 1–50 Hz bandpass filter to remove very low-frequency drifts and high-frequency muscle noise. Use a 50 Hz (or 60 Hz) notch filter to suppress power line interference [2].
  • Step 2: Real-time Ocular Correction. Implement a real-time capable algorithm, such as an adaptive filter using a forehead electrode as a noise reference or a simplified ICA-based approach, to identify and subtract ocular components [2] [6].
  • Step 3: Motion Artifact Suppression. Apply the iCanClean algorithm. This method uses a reference noise signal, which can be derived from the EEG data itself, to clean motion artifacts without additional hardware [6].
  • Step 4: Feature Extraction and Smoothing. Extract stable features from the beta (16–30 Hz) and gamma (30–50 Hz) bands. Apply a feature smoothing technique, such as a Linear Dynamic System (LDS) or moving average, to reduce temporal variability before classification [2] [3].
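Step 1 above can be sketched with standard causal (real-time-capable) IIR filters; the sampling rate and filter orders below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np
from scipy.signal import butter, iirnotch, lfilter

fs = 250.0  # assumed sampling rate in Hz

# 1-50 Hz Butterworth bandpass (causal, so usable in streaming mode)
b_bp, a_bp = butter(4, [1.0, 50.0], btype="bandpass", fs=fs)
# 50 Hz notch for power line interference
b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)

def preprocess(eeg):
    """Step 1 of the protocol: bandpass then notch filtering."""
    return lfilter(b_n, a_n, lfilter(b_bp, a_bp, eeg))

# Demo: 10 Hz rhythm + 50 Hz line noise + slow drift
t = np.arange(0, 4, 1 / fs)
raw = (np.sin(2 * np.pi * 10 * t)
       + 2 * np.sin(2 * np.pi * 50 * t)
       + 3 * np.sin(2 * np.pi * 0.2 * t))
clean = preprocess(raw)
```

For a 60 Hz mains region, the notch frequency changes accordingly; `lfilter` keeps the pipeline causal, unlike zero-phase `filtfilt`, which needs the full recording.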

The following workflow diagram illustrates the real-time processing pipeline:

Raw EEG Signal → Bandpass & Notch Filter → Real-Time Ocular Artifact Removal → Motion Artifact Removal (e.g., iCanClean) → Feature Extraction & Smoothing → Cleaned Signal for HRI

Protocol 2: Subject-Specific Deep Learning for Motion Artifact Removal

This protocol uses the Motion-Net deep learning model for high-fidelity removal of motion artifacts, ideal for scenarios with repetitive but subject-specific movements [5].

1. Objective: To train a subject-specific convolutional neural network (CNN) for removing motion artifacts from mobile EEG data.

2. Materials and Setup:

  • EEG System: A high-density mobile EEG system.
  • Accelerometer: A synchronized inertial measurement unit (IMU) attached to the head to provide a ground-truth reference of motion [5].
  • Computing Platform: A GPU-enabled computer for model training.

3. Procedure:

  • Step 1: Data Collection and Preprocessing. Collect EEG data while the subject performs standardized movements (e.g., walking, nodding). Synchronize the EEG and accelerometer data. Preprocess the data by resampling and performing baseline correction [5].
  • Step 2: Feature Engineering. Extract Visibility Graph (VG) features from the EEG signals. These features convert time-series signals into graph structures, providing robust input for the model that enhances performance on smaller datasets [5].
  • Step 3: Model Training. Train the Motion-Net model, a 1D U-Net architecture, using the preprocessed EEG data as input and the synchronized accelerometer-corrected data or clean segments as the target output. The training is performed separately for each subject [5].
  • Step 4: Validation. Validate the model on a held-out dataset from the same subject. Calculate performance metrics such as artifact reduction percentage (η), Signal-to-Noise Ratio (SNR) improvement, and Mean Absolute Error (MAE) [5].
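As a rough illustration of the Visibility Graph idea in Step 2, the sketch below converts a short time series into a natural visibility graph and uses node degree as a per-sample feature. The exact VG features used by Motion-Net are not specified here, so treat this as a generic construction.

```python
import numpy as np

def natural_visibility_degrees(x):
    """Build the natural visibility graph of a 1-D series and return each
    sample's node degree. Samples (a, x[a]) and (b, x[b]) are connected
    if every sample between them lies strictly below the straight line
    joining them (adjacent samples are always mutually visible)."""
    n = len(x)
    degree = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            ts = np.arange(a + 1, b)
            # line from (a, x[a]) to (b, x[b]) at the intermediate points
            line = x[a] + (x[b] - x[a]) * (ts - a) / (b - a)
            if ts.size == 0 or np.all(x[ts] < line):
                degree[a] += 1
                degree[b] += 1
    return degree

sig = np.array([1.0, 3.0, 2.0, 4.0, 1.5])
deg = natural_visibility_degrees(sig)  # one degree value per sample
```

The O(n²) loop is fine for short windows; for long recordings a divide-and-conquer VG algorithm would be preferable.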

The workflow for this subject-specific approach is outlined below:

Collect Subject EEG & Motion Data → Data Preprocessing & Synchronization → Feature Extraction (e.g., Visibility Graph) → Subject-Specific Model Training (Motion-Net) → Model Validation & Performance Check → (on success) Deploy Trained Model, or (retrain/adjust) return to Model Training

The Scientist's Toolkit: Research Reagent Solutions

Table 3: Essential Materials and Tools for Artifact Removal Research

Item Name | Function/Application | Specific Example/Note
Mobile EEG System with Active Electrodes | Records brain activity with reduced susceptibility to motion artifacts compared to passive electrodes. | Essential for any HRI study involving movement [6].
Electrooculogram (EOG) Electrodes | Record eye movements and blinks to serve as a reference signal for regression-based or adaptive filtering methods. | Placed above, below, and to the side of the eyes [1].
Inertial Measurement Unit (IMU) | Measures head acceleration and movement, providing a reference signal for motion artifact removal algorithms. | Used in Motion-Net and adaptive filtering approaches [5].
Transcutaneous Electrical Nerve Stimulation (TENS) Unit | Provides sensory feedback in prosthetic HRI studies; also a source of known artifact for testing removal algorithms. | Artifacts can be removed using adaptive filters as shown in prosthetic hand research [4].
iCanClean Algorithm | A real-time capable, generalized framework for removing multiple artifact sources without separate noise sensors. | Outperformed ASR, Auto-CCA, and Adaptive Filtering in phantom head tests [6].
Motion-Net Deep Learning Model | A subject-specific CNN for high-accuracy motion artifact removal when sufficient per-subject training data is available. | Effective with smaller datasets when using Visibility Graph features [5].
Visibility Graph (VG) Feature Extraction | A method to convert time-series signals into graph structures, improving deep learning model performance on smaller datasets. | Used to enhance the accuracy of Motion-Net [5].

Why Real-Time Processing is Non-Negotiable for Fluid HRI

In human-robot interaction (HRI), real-time processing is a fundamental requirement rather than a luxury. Delays in robotic decision-making directly compromise system responsiveness, safety, and user trust, particularly in dynamic service contexts where computational efficiency is critical [9]. The core challenge lies in designing intelligent computing architectures that can rapidly process multimodal inputs—from queries to biological signals—while maintaining high accuracy and low failure rates [9].

This document provides application notes and experimental protocols for implementing real-time processing systems in HRI research, with particular emphasis on artifact removal techniques for brain-computer interfaces and query-processing frameworks. The quantitative data and methodologies presented serve as essential references for researchers developing next-generation interactive robotic systems.

Quantitative Performance Analysis

Computational Framework Performance Metrics

Table 1: Performance Comparison of HRI Computing Models [9]

Performance Metric | HICM Framework | CDS Model | DGTA Model | CCS Model
Calculation Time Reduction | 8.67% improvement | Baseline | Not specified | Not specified
Service Time Reduction | 15.09% improvement | Baseline | Not specified | Not specified
Failure Rate Reduction | 7.87% improvement | Baseline | Not specified | Not specified
Success Factor | 11.8% higher | Baseline | Not specified | Not specified
Matching Ratio | 14.88% higher | Baseline | Not specified | Not specified
Failure Rates | 6.22% lower | Baseline | Not specified | Not specified

Brain-Computer Interface Performance Requirements

Table 2: Real-Time EEG Processing Specifications for HRI [3] [10]

Processing Parameter | Target Specification | Application Context
Temporal Resolution | ≤ 1 ms | EEG signal acquisition
Artifact Removal | Real-time EOG/blink removal | Prefrontal cortex signals
Frequency Bands | Delta (0.1–3.5 Hz), Theta (4–7.5 Hz), Alpha (8–13 Hz), Beta (14–30 Hz), Gamma (30–50 Hz) | Emotion estimation, intentional command detection
Electrode Placement | 10-20 system, emphasis on AF3, T7, TP7, P7, AF4, T8, TP8, P8 | Emotion estimation, eye artifact detection
Key Electrode Positions | F7, F8 channels for lateral eye movement detection | Phase difference analysis for eye movement

Experimental Protocols

Protocol: Real-Time Query Processing for Service Robots

Objective: Implement and validate the Hybrid Intelligent Computing Model (HICM) for robotic query processing and service provision [9].

Materials:

  • Robotic platform with processing unit
  • Human-robot communication interface
  • Service request simulation environment
  • Performance monitoring toolkit

Procedure:

  • System Configuration
    • Implement annealing and Tabu Search approaches for service matching
    • Configure decision-support system to handle human questions
    • Set up non-convergent optimization framework for query resolution
  • Performance Benchmarking

    • Execute standardized query sets across CDS, DGTA, CCS, and HICM models
    • Measure calculation time, service time, and failure rates
    • Calculate success factors and matching ratios for each framework
  • Validation Testing

    • Deploy in dynamic service contexts with varying query complexity
    • Monitor system performance under increasing computational loads
    • Validate robustness through failure rate analysis across 1000+ interactions

Data Analysis:

  • Compare pre- and post-implementation metrics using paired t-tests
  • Calculate percentage improvements across all performance dimensions
  • Correlate computational efficiency with user satisfaction measures
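The paired t-test comparison described above can be sketched as follows; the service-time numbers are synthetic stand-ins, not measurements from the cited study.

```python
import numpy as np
from scipy import stats

# Hypothetical service-time measurements (seconds) for the same query set
# run under the baseline CDS model and under HICM; values are illustrative.
rng = np.random.default_rng(42)
cds_times = rng.normal(10.0, 1.0, size=30)
hicm_times = cds_times * 0.85 + rng.normal(0.0, 0.3, size=30)  # ~15% faster

# Paired test: each query is its own control, so ttest_rel (not ttest_ind)
t_stat, p_value = stats.ttest_rel(cds_times, hicm_times)
improvement_pct = 100.0 * (cds_times.mean() - hicm_times.mean()) / cds_times.mean()
```

A paired design is the right choice here because identical query sets are executed under each framework, removing per-query difficulty as a confound.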

Protocol: Real-Time EEG Artifact Removal for Affective HRI

Objective: Implement optimized real-time electro-oculographic (EOG) artifact removal and emotion estimation for human-robot interaction applications [3].

Materials:

  • EEG acquisition system (e.g., TMSi SAGA 64+) [10]
  • 8-electrode setup (AF3, T7, TP7, P7, AF4, T8, TP8, P8)
  • Signal processing software (MATLAB, Python with MNE, or similar)
  • Robot platform with emotion adaptation capabilities

Procedure:

  • Signal Acquisition Setup
    • Apply electrodes according to 10-20 system
    • Configure sampling rate ≥ 256 Hz
    • Implement 50 Hz notch filter for power line noise removal
    • Apply bandpass filter (1-50 Hz) to capture relevant frequency bands
  • Real-Time Artifact Removal

    • Method A: Independent Component Analysis (ICA) with wavelet analysis
    • Method B: Dual-threshold blink detection (characteristic shape recognition)
    • Implement lateral eye movement detection using phase difference analysis between F7-F8
    • Validate artifact removal against ground truth EOG signals
  • Feature Extraction and Emotion Classification

    • Extract stable features across sessions (minimum 5-minute baseline)
    • Apply feature smoothing (Linear Dynamic Systems or Savitzky-Golay)
    • Implement dimensional reduction through filter-based methods
    • Classify emotions into three affective states (positive, neutral, negative)
  • System Integration and Validation

    • Establish real-time communication between EEG system and robot
    • Implement emotion adaptation algorithms in robot behavior controller
    • Validate with 5+ participants in controlled HRI scenario
    • Measure classification accuracy and system response latency

Data Analysis:

  • Calculate artifact removal efficiency (signal-to-noise ratio improvements)
  • Measure emotion classification accuracy under subject-dependent and subject-independent paradigms
  • Quantify processing pipeline latency to ensure real-time performance (<100 ms total latency)
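The Savitzky-Golay variant of the feature-smoothing step can be sketched as below; the window length and polynomial order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_features(features, window=9, polyorder=2):
    """Savitzky-Golay smoothing of a (time x feature) matrix, reducing
    the temporal variability of extracted features before classification;
    window/polyorder are illustrative choices."""
    return savgol_filter(features, window_length=window,
                         polyorder=polyorder, axis=0)

# Demo: a slowly varying "emotion feature" with additive jitter
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
true = np.sin(2 * np.pi * t)
noisy = true + 0.3 * rng.standard_normal(t.size)
smoothed = smooth_features(noisy[:, None])[:, 0]
```

A centered Savitzky-Golay window adds a small group delay in streaming use (window/2 samples), which must be counted toward the pipeline's latency budget.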

Protocol: Eye Artifact-Based Robot Control for Assistive Applications

Objective: Develop and validate a BCI for robot control using eye artifacts for users with neurodegenerative disorders [10].

Materials:

  • TMSi SAGA 64+ EEG system or equivalent
  • TIAGo assistive robot or similar platform
  • Graphical user interface for command selection
  • Real-time signal processing environment

Procedure:

  • Eye Artifact Detection Setup
    • Configure prefrontal cortex electrode placement (Fp1, Fp2, F7, F8)
    • Implement dual-threshold blink detection algorithm
    • Develop lateral eye movement detection using ordered peak-valley formation
    • Establish phase difference analysis for left/right discrimination
  • Command Structure Implementation

    • Map single blinks to selection commands
    • Map double blinks to navigation commands
    • Map quadruple blinks to emergency stop functions
    • Implement lateral movements for cursor control
  • Real-Time System Integration

    • Develop virtual timestamp algorithm for event detection
    • Establish communication protocol between BCI and robot controller
    • Create state machine for handling command sequences
    • Implement safety protocols for misinterpreted commands
  • Validation Methodology

    • Recruit 5 participants for system validation
    • Design task-based evaluation (object retrieval, navigation)
    • Measure success rate of command recognition
    • Quantify task completion time and error rates
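The blink-to-command mapping above can be sketched as a minimal burst-grouping state machine. The blink counts follow the protocol (1 = select, 2 = navigate, 4 = emergency stop); the 0.5 s inter-burst gap is an assumed parameter, not a value from the cited study.

```python
def decode_blink_sequence(blink_times, gap=0.5):
    """Group blink timestamps (seconds) into bursts separated by more
    than `gap`, then map each burst's blink count to a robot command."""
    mapping = {1: "SELECT", 2: "NAVIGATE", 4: "EMERGENCY_STOP"}
    commands, burst = [], []
    for t in blink_times:
        if burst and t - burst[-1] > gap:
            # burst ended: emit its command and start a new burst
            commands.append(mapping.get(len(burst), "IGNORE"))
            burst = []
        burst.append(t)
    if burst:
        commands.append(mapping.get(len(burst), "IGNORE"))
    return commands

# One blink, then a double blink, then four rapid blinks
events = [0.0, 2.0, 2.3, 5.0, 5.2, 5.4, 5.6]
cmds = decode_blink_sequence(events)
```

Unmapped counts (e.g., a triple blink) fall through to "IGNORE", which is one simple way to implement the safety protocol for misinterpreted commands.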

Data Analysis:

  • Calculate detection accuracy for each eye artifact type
  • Measure information transfer rate (bits per minute)
  • Assess user satisfaction through standardized questionnaires
  • Compare task performance with and without artifact rejection

Visualization of HRI Processing Workflows

Real-Time EEG Processing Pipeline

EEG Signal Acquisition → Preprocessing → Artifact Removal → Feature Extraction → Classification → Robot Control. The artifact removal stage branches into two method families: ICA methods (followed by wavelet analysis) and dual-threshold detection (followed by phase analysis).

Hybrid Intelligent Computing Model Architecture

Human Query Input → Decision Support System → Service Matching → Optimization Engine → Robot Service Response. The optimization engine combines Tabu Search and Simulated Annealing within a non-convergent optimization framework.

Research Reagent Solutions

Table 3: Essential Research Materials for Real-Time HRI Systems

Category | Specific Solution | Function in HRI Research
Computing Framework | Hybrid Intelligent Computing Model (HICM) | Reduces calculation time by 8.67% and service time by 15.09% through combined annealing and Tabu Search optimization [9]
EEG Acquisition System | TMSi SAGA 64+ (64+ channels) | Provides high-temporal-resolution (≤1 ms) EEG acquisition with 10-20 electrode placement for non-invasive brain activity monitoring [10]
Artifact Removal Algorithm | Dual-threshold blink detection with phase analysis | Enables real-time EOG artifact removal by combining characteristic shape recognition (blinks) and opposite-phase signals at F7/F8 (lateral movements) [10]
Signal Processing Toolbox | Real-time ICA with wavelet analysis | Removes electro-oculographic artifacts while preserving valuable EEG information through component separation and reconstruction [3]
Emotion Estimation Model | Multi-feature classification with smoothing | Combines stable features from beta/gamma bands (16–50 Hz) with feature smoothing (LDS or Savitzky-Golay) for real-time emotion classification [3]
Robot Control Interface | Eye artifact-based BCI command system | Translates detected eye artifacts (blinks, lateral movements) into robot control commands through virtual timestamps and state machines [10]
Performance Validation Suite | Standardized query sets and task metrics | Enables comparative performance analysis across computational frameworks using calculation time, service time, and failure rates [9]

Artifacts—unwanted signals originating from non-neural or non-behavioral sources—represent a significant challenge in human-robot interaction (HRI) research. These intrusive signals can severely degrade the performance of systems designed for emotion estimation and intent decoding, ultimately undermining the naturalness and effectiveness of human-robot collaboration. The pursuit of real-time HRI necessitates robust artifact removal pipelines that can operate under strict temporal constraints without compromising signal integrity. This application note synthesizes current research and provides detailed protocols for addressing artifact-related challenges in key HRI applications, with particular emphasis on neuroscientific and multimodal interaction contexts.

Quantitative Impact of Artifacts and Mitigation Performance

The tables below summarize empirical findings on the effects of artifacts and the performance of various mitigation strategies across different HRI modalities.

Table 1: Impact of Metal Artifacts on CT Image Quality and BMD Measurement Accuracy (Adapted from [11])

Reconstruction Method | Mean Attenuation (HU) | Signal-to-Noise Ratio (SNR) | Artifact Index (AI) | Bone Mineral Density (BMD) Accuracy
Conventional Imaging (CI) | 583.6 | Highest | Highest | Low
O-MAR | Significantly reduced | Moderate | Reduced | High (comparable to CI*)
VMI (200 keV) | Significantly reduced | Lowest | Lowest | Low
VMI + O-MAR | Significantly reduced | Lowest | Lowest | Moderate
CI* (Gold Standard) | Reference | Reference | Reference | Reference

Table 2: Emotion and Gesture Decoding Accuracy in Multimodal HRI (Data from [12])

Modality | Model Used | Classification Target | Accuracy
Touch + Sound | Support Vector Machine (SVM) | 10 emotions | 40.0%
Touch Only | Not specified | 10 emotions | Lower than multimodal
Sound Only | Not specified | 10 emotions | Lower than multimodal
Touch + Sound | CNN-LSTM | 6 social touch gestures | 90.74%

Table 3: Key Artifact Types and Their Characteristics in HRI Research

Artifact Type | Primary Source | Impacted HRI Modality | Common Mitigation Strategies
Electro-oculographic (EOG) | Eye movements, blinking | EEG-based emotion estimation | Real-time ICA, wavelet analysis [3]
Metal Artifacts | Orthopedic implants, screws | CT-based bone density assessment | O-MAR algorithm, Virtual Monoenergetic Imaging (VMI) [11]
Muscle Artifacts | Head/body movements | EEG-based emotion estimation | Frequency filtering (e.g., 1–50 Hz band) [3]
Background Noise | Electrical interference | EEG, tactile sensing | Notch filters (e.g., 50 Hz) [3]

Experimental Protocols for Artifact Handling in HRI

Protocol: Multimodal Emotion and Gesture Decoding via Touch and Sound

Objective: To reliably decode human emotions and social gestures through tactile and auditory signals in HRI [12].

Materials:

  • A social robot (e.g., Pepper robot) equipped with:
    • Custom piezoresistive pressure sensor for touch data acquisition.
    • Microphone for capturing auditory feedback from touch gestures.
  • Data recording and processing software.

Procedure:

  • Participant Recruitment: Recruit a sufficient number of participants (e.g., n=28).
  • Emotion Expression Task: Instruct participants to convey ten distinct emotions (e.g., anger, happiness, sadness, sympathy, fear, disgust) to the robot using spontaneous touch gestures. Do not provide predefined gesture options to avoid bias.
  • Social Gesture Task: Instruct participants to perform six predefined social touch gestures (e.g., "pat," "scratch," "stroke," "tickle," "slap," "squeeze").
  • Data Collection: Simultaneously record time-synchronized data from the tactile sensor and microphone for all trials.
  • Feature Extraction: From the collected data, extract features related to pressure, duration, frequency, and location for tactile data, and relevant acoustic features for sound data.
  • Model Training and Validation:
    • For emotion classification, train a Support Vector Machine (SVM) model using the multimodal (tactile + auditory) feature set. Validate using cross-validation.
    • For gesture classification, train a Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) model on the multimodal data. Validate using cross-validation.
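The SVM training-and-validation step can be sketched with scikit-learn on synthetic stand-in data; the study's real tactile/acoustic features (pressure, duration, spectra) are not reproduced here, so the three well-separated toy classes below are purely illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for fused tactile + acoustic feature vectors:
# three toy "emotion" classes with shifted means in a 12-D feature space
rng = np.random.default_rng(0)
n_per_class, n_feat = 40, 12
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_feat))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

# Standardize, then classify with an RBF-kernel SVM; 5-fold CV as in the protocol
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
mean_accuracy = scores.mean()
```

For the gesture task, the SVM would be swapped for a CNN-LSTM operating on raw time-synchronized sequences rather than fixed-length feature vectors.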

Protocol: Real-Time EEG Artifact Removal for Emotion Estimation

Objective: To remove artifacts from EEG signals in real-time for robust emotion estimation in affective HRI [3].

Materials:

  • A wearable EEG headset with at least 8 electrodes (recommended: AF3, AF4, T7, T8, TP7, TP8, P7, P8).
  • A computing system capable of real-time signal processing.

Procedure:

  • Signal Acquisition: Set up the EEG system to record signals from the selected electrodes. Apply a bandpass filter (e.g., 1-50 Hz) during acquisition to partially remove muscle artifacts and high-frequency noise.
  • Real-Time Artifact Removal:
    • 50 Hz Noise Removal: Apply a notch filter (e.g., IIR-based) to suppress power line interference.
    • EOG Artifact Removal: Implement a real-time artifact removal method, such as:
      • ICA-based methods: Use algorithms like Adaptive Mixture ICA (AMICA) for online decomposition and removal of ocular components.
      • Wavelet-enhanced methods: Apply wavelet transforms to identify and correct for artifact-corrupted signal segments.
  • Feature Extraction: From the cleaned EEG signals, compute features in the frequency domain (e.g., power spectral density in beta and gamma bands) known to be effective for emotion estimation.
  • Feature Smoothing: Apply a smoothing technique (e.g., Linear Dynamic Systems, Moving Average) to the extracted features to reduce temporal variability.
  • Emotion Classification: Use a lightweight classifier (e.g., SVM) to estimate the emotional state (e.g., positive, negative, neutral) based on the smoothed features, ensuring the entire pipeline operates within real-time constraints.
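The frequency-domain feature extraction in the pipeline above can be sketched with Welch's method; the sampling rate, segment length, and test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Mean power spectral density of `eeg` within `band` (Hz),
    estimated with Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

fs = 250.0
t = np.arange(0, 8, 1 / fs)
eeg = np.sin(2 * np.pi * 20 * t)        # dominant 20 Hz (beta) rhythm
beta = band_power(eeg, fs, (16, 30))    # should capture the rhythm
gamma = band_power(eeg, fs, (30, 50))   # should be near zero here
```

In the real pipeline these band powers would be computed per channel on short sliding windows, then smoothed and passed to the classifier.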

Protocol: Metal Artifact Reduction for Peri-Implant Bone Assessment

Objective: To improve image quality and accuracy of Bone Mineral Density (BMD) measurements in QCT scans affected by metal implant artifacts [11].

Materials:

  • A dual-layer spectral detector CT scanner.
  • Porcine femoral specimens with inserted metal screws (or similar phantoms).
  • A calibrated quantitative CT phantom for BMD measurement.
  • Image processing workstation with O-MAR and VMI reconstruction capabilities.

Procedure:

  • Sample Preparation: Insert three pure titanium surgical screws into each porcine femoral specimen (proximal metaphyseal and distal diaphyseal regions).
  • Image Acquisition: Scan the specimens using a standardized CT protocol (e.g., 120 kVp, 250 mAs).
  • Image Reconstruction: Reconstruct the images using five different methods:
    • Conventional Imaging (CI)
    • CI with Orthopedic Metal Artifact Reduction (O-MAR)
    • Virtual Monoenergetic Images (VMI) at a high energy level (e.g., 200 keV)
    • VMI combined with O-MAR (VMI + O-MAR)
    • CI after metal implant removal (CI*, serving as the gold standard)
  • Quantitative Analysis:
    • Image Quality: On an axial slice, place Regions of Interest (ROIs) in cancellous bone adjacent to the metal implant. For each ROI and reconstruction method, record the mean attenuation (HU) and its standard deviation (SD). Calculate the Signal-to-Noise Ratio (SNR = μ/SD) and Artifact Index (AI).
    • BMD Accuracy: Using a QCT post-processing workstation, place ROIs at standardized distances from the implant periphery (e.g., 0mm, 7mm, 14mm) on both axial and sagittal planes. Record the BMD values for each reconstruction method.
  • Statistical Analysis: Perform statistical tests (e.g., paired t-tests, Kruskal-Wallis) to compare HU, SNR, AI, and BMD values across the different reconstruction groups.
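The SNR and Artifact Index computations in the quantitative analysis can be sketched as follows. The AI formula shown (square root of the ROI-versus-reference variance difference) is a common convention in the CT literature and an assumption here, as are the synthetic HU values.

```python
import numpy as np

def roi_metrics(roi_hu, reference_sd):
    """Image-quality metrics for a cancellous-bone ROI:
    SNR = mean/SD of the ROI; Artifact Index compares ROI noise to the
    SD of an artifact-free reference region, AI = sqrt(SD_roi^2 - SD_ref^2)
    (clamped at zero when the ROI is quieter than the reference)."""
    mu, sd = float(np.mean(roi_hu)), float(np.std(roi_hu))
    snr = mu / sd
    ai = float(np.sqrt(max(sd ** 2 - reference_sd ** 2, 0.0)))
    return snr, ai

# Illustrative HU samples near an implant vs. an artifact-free reference SD
rng = np.random.default_rng(3)
roi = rng.normal(580.0, 60.0, size=500)   # artifact-corrupted ROI
snr, ai = roi_metrics(roi, reference_sd=20.0)
```

Lower AI and higher SNR after O-MAR or VMI reconstruction would indicate successful artifact suppression in this scheme.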

Workflow Visualization for Artifact Management in HRI

HRI data acquisition feeds four multimodal streams: the EEG signal passes through EOG/muscle/noise removal; the tactile sensor and audio signal are combined by touch/sound feature fusion; CT imaging passes through O-MAR + VMI processing. All cleaned streams converge on Feature Extraction → Emotion/Intent Decoding Model → Robot Response.

HRI Artifact Management Workflow

Raw EEG Signal → Notch Filter (50 Hz) → Bandpass Filter (1–50 Hz) → ICA for EOG Removal → Wavelet Analysis → Artifact-Reduced Signal → Feature Extraction (PSD: beta/gamma bands) → Feature Smoothing (LDS/moving average) → Emotion Classification (SVM) → HRI Adaptation

EEG Artifact Removal Pipeline

The Scientist's Toolkit: Essential Research Reagents and Materials

Table 4: Key Research Reagent Solutions for HRI Artifact Management Studies

Item | Function/Application | Example/Specification
Dual-Layer Spectral Detector CT | Enables Virtual Monoenergetic Imaging (VMI) for metal artifact reduction in peri-implant bone assessment [11]. | Philips IQon system
O-MAR Algorithm | Projection-based correction algorithm that segments and "inpaints" metal-corrupted sinogram data to reduce artifacts in CT [11]. | Philips Healthcare implementation
Piezoresistive Pressure Sensor | Custom tactile sensor for measuring pressure, location, and duration of touch gestures in social HRI [12]. | Custom-built for robot integration
Wearable EEG System | Acquires neural signals for emotion estimation; requires specific electrode placement for optimal results [3]. | Minimum 8 electrodes (AF3, AF4, T7, T8, TP7, TP8, P7, P8)
Real-Time ICA Software | Independent Component Analysis for blind source separation and removal of EOG artifacts from EEG streams [3]. | Adaptive Mixture ICA (AMICA)
Support Vector Machine (SVM) | A lightweight, effective classifier for real-time emotion estimation from EEG or multimodal features [12] [3]. | Linear or RBF kernel
CNN-LSTM Model | Hybrid deep learning architecture for temporal sequence classification, effective for social touch gesture recognition [12]. | Used for gesture classification from tactile-audio data

Wearable electroencephalography (EEG) is revolutionizing neurotechnology by enabling brain monitoring in real-world environments, from clinical trials to human-robot interaction (HRI) research. However, this transition from controlled laboratory settings to dynamic ecological environments introduces two persistent technical challenges: the use of dry electrodes and the management of motion artifacts. Dry electrodes, while offering superior portability and ease of use, often compromise signal quality due to higher and more unstable electrode-skin impedance. Simultaneously, motion artifacts—induced by subject movement, environmental noise, and the inherent limitations of mobile acquisition—corrupt the neurological signal of interest, complicating real-time analysis. For HRI applications, where low-latency, reliable brain signal decoding is crucial for seamless and safe interaction, overcoming these challenges is paramount. This document provides application notes and experimental protocols to address these specific issues, framed within the context of real-time artifact removal for HRI research.

Quantitative Comparison of Dry-Electrode EEG Performance

Recent benchmarking studies provide critical quantitative data on the performance of dry-electrode EEG systems compared to standard wet EEG in a clinical trial context. The following tables summarize key findings on system burden and signal quality.

Table 1: Comparison of Operational Burden between Dry-Electrode and Standard EEG Systems

Metric Standard EEG (Wet) Dry-EEG Device A (DSI-24) Dry-EEG Device B (Quick-20r) Dry-EEG Device C (zEEG)
Median Set-up Time Benchmark (Longest) ~50% faster than Standard EEG [13] Significantly faster [13] Significantly faster [13]
Median Clean-up Time Benchmark (Longest) Significantly faster [13] Significantly faster [13] Significantly faster [13]
Technician Ease of Set-up (0-10) 7 9 [13] 7 [13] 7 [13]
Technician Ease of Clean-up (0-10) 5 9 [13] 9 [13] 9 [13]
Participant Comfort Highest (Benchmark) Matched or lower than Standard EEG [13] Matched or lower than Standard EEG [13] Matched or lower than Standard EEG [13]

Table 2: Signal Quality Performance of Dry-Electrode EEG Across Functional Domains

EEG Application / Signal Aspect Dry-EEG Performance vs. Standard EEG Notes and Challenges
Resting-State Quantitative EEG Adequately captured [13] Suitable for power spectrum analysis.
P300 Evoked Potential Adequately captured [13] Reliable for event-related potential studies.
Low-Frequency Activity (< 6 Hz) Notable challenges [13] Prone to contamination from motion and drift.
Induced Gamma Activity (40-80 Hz) Notable challenges [13] Susceptible to muscle artifact contamination.
Action Anticipation (MRCP/BP) Feasible with advanced processing [14] [15] Early BP (~1.5s before movement) is low amplitude and difficult to detect.
Individual Finger Movement Decoding Feasible with deep learning [15] Achieved ~80% accuracy for binary classification in real-time BCI [15].

Experimental Protocols for Validation and Testing

Protocol for Benchmarking Dry-Electrode EEG Systems

Objective: To quantitatively evaluate the performance of a dry-electrode EEG device against a standard wet-EEG system in a context relevant to HRI research.

Materials:

  • The dry-electrode EEG system under test.
  • A standard wet-EEG system (e.g., QuikCap Neo Net with Grael amplifier) as a benchmark [13].
  • A cohort of healthy participants (n ≥ 15 recommended for initial studies).
  • Computing equipment with recording and analysis software.

Procedure:

  • Experimental Design: Employ a within-subjects design where each participant is tested with both the dry and standard EEG systems on separate days, counterbalancing the order [13].
  • Task Battery: Administer a series of standardized tasks:
    • Eyes-open/closed resting-state: For quantitative background EEG analysis [13].
    • Auditory Oddball Paradigm: To elicit P300 evoked responses [13].
    • Motor Execution/Imagery Paradigm: Include self-paced finger, hand, or arm movements (or kinesthetic motor imagery) to assess the decoding of movement intention, critical for HRI [14] [15].
  • Data Collection:
    • Record set-up and clean-up times.
    • Administer standardized comfort and usability questionnaires to participants and technicians after each session [13].
    • Synchronize task triggers with EEG recording.
  • Data Analysis:
    • Compare set-up/clean-up times and questionnaire responses between systems (Table 1).
    • Compute quantitative EEG metrics (e.g., power spectral density, signal-to-noise ratio for P300, decoding accuracy for motor tasks) for both systems and assess correlation and equivalence (Table 2).
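As an illustration of the quantitative analysis step, the sketch below estimates band power with Welch's method so dry and wet recordings can be compared in a given frequency band. The signals, sampling rate, and noise levels are synthetic stand-ins, not data from the cited studies.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Mean power spectral density of one channel within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Illustrative comparison on synthetic 10 Hz "alpha" activity for two systems:
# the dry-electrode recording is modeled as the same signal with more noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
wet = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
dry = np.sin(2 * np.pi * 10 * t) + 0.6 * rng.standard_normal(t.size)

alpha = (8, 13)
print(band_power(wet, fs, alpha), band_power(dry, fs, alpha))
```

The same per-band estimates can feed the equivalence analysis in Table 2 (e.g., correlating band power between systems across participants).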

Protocol for Real-Time Artifact Removal in HRI Scenarios

Objective: To implement and validate a pipeline for the real-time detection and removal of motion and ocular artifacts during a dynamic human-robot interaction task.

Materials:

  • A wearable EEG system (dry or wet).
  • Auxiliary inertial measurement units (IMUs) attached to the participant's head/hands (recommended).
  • A robotic platform (e.g., collaborative robot arm, robotic hand).
  • A real-time data processing platform (e.g., MATLAB Simulink, LabStreamingLayer, Python with MNE).

Procedure:

  • Task Design: Develop an HRI task that induces typical artifacts, such as:
    • Collaborative Assembly: The participant and robot alternately place objects in a structure, requiring the participant to reach, grasp, and look at the robot.
    • Teleoperation: The participant controls a remote robot via BCI, involving body movements and visual tracking.
  • Data Acquisition: Record simultaneous EEG, IMU (motion data), and task markers.
  • Real-Time Processing Pipeline:
    • Filtering: Apply a bandpass filter (e.g., 1-40 Hz) and a notch filter (50/60 Hz) to remove line noise [3].
    • Artifact Detection: Implement one or more of the following:
      • Thresholding: Flag segments where amplitude exceeds a pre-defined threshold (e.g., ±100 µV).
      • IMU-based Rejection: Use accelerometer/gyroscope data from IMUs to identify periods of significant head movement [16].
      • Algorithmic Detection: Employ optimized algorithms like Artifact Subspace Reconstruction (ASR) or blind source separation (ICA) for online ocular and muscular artifact correction [3] [16].
      • Machine Learning: Utilize a pre-trained classifier (e.g., on features like scalp topography) to identify specific artifacts like eye blinks [17].
    • Artifact Removal: Apply the correction (e.g., ASR, ICA component removal) or simply flag the contaminated data segment for the robot's controller to ignore.
  • Validation:
    • Offline: Compare the cleaned signal with a simultaneously recorded high-fidelity EEG.
    • Online: Assess the performance of the HRI task (e.g., success rate, task completion time) with and without the real-time artifact removal pipeline active.
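The filtering and thresholding stages of this pipeline can be sketched as causal, chunk-wise processing suitable for a real-time loop. This is a minimal illustration of the protocol's Step 1 (bandpass + notch) and amplitude thresholding; filter orders, the 250 Hz sampling rate, and the helper name `clean_and_flag` are assumptions for the example, and a production pipeline would also carry filter state (`zi`) across chunks.

```python
import numpy as np
from scipy.signal import butter, iirnotch, tf2sos, sosfilt

fs = 250.0  # assumed sampling rate
# Causal IIR filters, applicable chunk-by-chunk in a streaming loop
sos_bp = butter(4, [1, 40], btype="bandpass", fs=fs, output="sos")
b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)
sos_notch = tf2sos(b_n, a_n)

def clean_and_flag(chunk_uv, threshold_uv=100.0):
    """Filter one incoming EEG chunk (in µV) and flag it as contaminated
    if any filtered sample exceeds the amplitude threshold.
    Note: for simplicity, filter state is not preserved between chunks."""
    x = sosfilt(sos_notch, chunk_uv)
    x = sosfilt(sos_bp, x)
    contaminated = bool(np.any(np.abs(x) > threshold_uv))
    return x, contaminated
```

Flagged chunks can then be handed to the robot's controller to ignore, per the artifact-removal step above.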

Signaling Pathways and Workflow Diagrams

Dry-Electrode EEG Evaluation Workflow

Diagram: Study Initiation → Participant Recruitment (n ≥ 15) → Counterbalanced Device Testing (Dry-EEG vs. Standard EEG) → Task Battery (Resting-State EEG; Auditory P300 Task; Motor Execution/Imagery Task) → Operational Data Collection (Timing, Questionnaires) → Quantitative Signal Analysis (Power Spectral Density; ERP Analysis/P300; Decoding Accuracy) → Performance Benchmarking & Report Generation

Real-Time Artifact Removal Pipeline for HRI

Diagram: Raw EEG & IMU Data Stream → Preprocessing (Bandpass & Notch Filter) → Artifact Detection (Amplitude Thresholding; IMU Motion Detection; ML Classifier, e.g., Eye Blink) → Artifact Removal/Correction (ASR/ICA Cleaning or Data Segment Flagging) → Clean EEG to BCI Decoder & Robot Controller

The Scientist's Toolkit: Key Reagents and Materials

Table 3: Essential Research Tools for Wearable EEG and HRI Experiments

Item Function/Application Examples & Notes
Dry-Electrode EEG Systems Mobile EEG acquisition for real-world HRI. CGX Quick-20r, Wearable Sensing DSI-24, Zeto zEEG [13]. Vary in set-up speed, comfort, and signal performance.
Standard Wet-EEG System Gold-standard benchmark for validation studies. Systems with QuikCap Neo Net and Grael amplifier [13]. Essential for controlled comparison of dry-EEG performance.
Inertial Measurement Units (IMUs) Monitoring head and body movement for motion artifact detection. Can be integrated or external. Provides objective data to correlate with motion artifacts in EEG [16].
Robotic Platforms End-effector for HRI tasks and BCI control. Collaborative robot arms (e.g., UR3e), dexterous robotic hands [15]. Platform choice depends on the HRI paradigm.
Real-Time Processing Software Platform for implementing artifact removal pipelines. MATLAB/Simulink, LabStreamingLayer (LSL), Python with MNE-real-time [3] [15].
Public EEG Datasets Algorithm development and benchmarking. SEED (for emotion) [3], MI-based BCI datasets [14]. Crucial for training machine learning models like artifact classifiers.
Deep Learning Frameworks Decoding motor commands and improving artifact removal. EEGNet [15], Convolutional Neural Networks (CNNs) [14] [18]. Enable end-to-end decoding of complex intentions (e.g., individual finger movements).

A Technical Deep Dive: State-of-the-Art Artifact Removal Algorithms

Blind Source Separation (BSS) represents a cornerstone of modern signal processing, enabling the recovery of underlying source signals from their observed mixtures without prior knowledge of the mixing process. In the context of real-time artifact removal for human-robot interaction (HRI) research, BSS techniques are indispensable for processing electrophysiological signals, particularly electroencephalography (EEG). HRI environments present unique challenges for signal acquisition, where artifacts originating from muscle activity, eye movements, cable swings, and magnetic induction significantly compromise signal integrity [19]. These artifacts must be effectively separated and removed to ensure accurate interpretation of neural signals for brain-computer interface (BCI) applications. Among the diverse BSS algorithms, Independent Component Analysis (ICA) and the Second-Order Blind Identification (SOBI) algorithm have emerged as fundamental workhorses, each offering distinct advantages for specific HRI scenarios.

ICA operates on the principle of maximizing the statistical independence of component signals, typically by minimizing mutual information or maximizing non-Gaussianity [20] [21]. The core mathematical model assumes that observed signals ( \mathbf{X} ) are linear mixtures of statistically independent sources ( \mathbf{S} ), related via a mixing matrix ( \mathbf{A} ) (( \mathbf{X} = \mathbf{A} \mathbf{S} )). The objective is to estimate a separating matrix ( \mathbf{W} ) that recovers the original sources (( \mathbf{S} = \mathbf{W} \mathbf{X} )) [20]. In practical terms, ICA effectively solves the "cocktail party problem"—separating individual voices from recorded mixtures—making it exceptionally suitable for isolating neural signals from artifact-contaminated EEG data [21]. Its strength lies in identifying and removing artifacts embedded within the data without discarding valuable neurological information, thereby preserving the continuity of brain signals essential for real-time HRI systems [22].

SOBI, in contrast, leverages a different statistical property by exploiting the time structure of sources, specifically their autocorrelation functions [22]. It operates under the assumption that source signals are temporally correlated but mutually uncorrelated with different time lags. By performing joint diagonalization of several covariance matrices computed at different time lags, SOBI can separate sources with distinct spectral characteristics. This methodological approach proves particularly effective for separating artifacts with pronounced periodic components, such as power line interference, channel noise, and certain movement artifacts common in dynamic HRI environments.

Table 1: Core Algorithmic Characteristics of ICA and SOBI

Feature Independent Component Analysis (ICA) Second-Order Blind Identification (SOBI)
Statistical Principle Maximizes non-Gaussianity/statistical independence Exploits temporal structure & autocorrelation
Separation Basis Higher-order statistics Second-order statistics (covariance matrices)
Optimal Use Cases Artifact removal (eye blinks, muscle activity), source localization [20] [22] Periodic noise, channel-specific artifacts, colored noise environments [22]
Computational Load Moderate to High Generally Lower
Real-Time Suitability Moderate (requires adaptations like ORICA) [23] High

Algorithm Implementation and Workflow

The implementation of ICA for artifact removal follows a systematic pipeline comprising data preparation, decomposition, component identification, and signal reconstruction. A critical preprocessing step for most ICA algorithms is whitening (or sphering), which removes correlations between channels and normalizes their variances, effectively transforming the data into an uncorrelated unit-variance space [20]. Geometrically, this process restores the data's spherical structure, simplifying the subsequent ICA step to identifying an appropriate rotation of the whitened data. Whitening not only standardizes the data but also dramatically improves the convergence speed and stability of ICA algorithms.
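The whitening step described above can be written compactly: estimate the channel covariance, then rescale along its eigenvectors so the transformed data has identity covariance. The sketch below is a minimal ZCA-style sphering on synthetic channels-by-samples data; the function name and data are illustrative.

```python
import numpy as np

def whiten(X):
    """Whiten a (channels x samples) matrix: zero mean, identity covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA-style sphering matrix: E D^{-1/2} E^T
    W_white = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return W_white @ X

rng = np.random.default_rng(1)
# Four synthetic channels with different variances
X = rng.standard_normal((4, 5000)) * np.array([[1], [3], [0.5], [2]])
Z = whiten(X)
# The covariance of Z is the identity matrix (up to numerical precision)
print(np.round(np.cov(Z), 3))
```

After sphering, ICA only needs to find a rotation of Z, which is the geometric simplification noted above.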

Following whitening, the core ICA algorithm estimates the separating matrix that maximizes the independence of the resulting components. Multiple ICA variants exist, including Infomax ICA, FastICA, and extended algorithms capable of identifying both sub-Gaussian and super-Gaussian sources [22] [21]. The Infomax algorithm, for instance, implemented in tools like EEGLAB's runica, employs a natural gradient approach to minimize the mutual information between output channels, effectively maximizing their statistical independence [22]. During decomposition, the algorithm iteratively adjusts the separating matrix until a convergence criterion is met, producing a set of independent components along with their corresponding time courses and spatial topographies.

Table 2: ICA Algorithms and Their Applications in HRI Research

Algorithm Key Mechanism Advantages for HRI
Infomax ICA Maximizes information transfer (mutual information minimization) via gradient descent Default in EEGLAB; effective for standard EEG artifact separation [22]
FastICA Fixed-point iteration to maximize negentropy (non-Gaussianity) Faster convergence; computationally efficient [21]
SOBI Joint diagonalization of covariance matrices at multiple time lags Effective for correlated noise; robust to certain motion artifacts [22]
ORICA (Online ICA) Recursive, sample-by-sample update of unmixing matrix Enables real-time processing in dynamic HRI settings [23]

The subsequent workflow for artifact removal involves meticulous analysis of the derived components. Researchers must identify which components correspond to neural activity and which represent artifacts. This classification relies on evaluating multiple characteristics: (1) spatial topography—artifact components often exhibit characteristic scalp distributions (e.g., frontal focus for ocular artifacts, periocular or neck muscle patterns for EMG); (2) temporal dynamics—artifact components typically show time courses reflecting their non-neural origin (e.g., pulse-like patterns for eye blinks, high-frequency bursts for muscle activity); and (3) spectral properties—artifact components often display distinctive power spectral densities (e.g., slow drifts for movement artifacts, high-frequency content for muscle noise) [22]. After identifying artifactual components, they can be subtracted from the original signal by projecting only the neural components back to the sensor space, resulting in cleaned data suitable for subsequent analysis in HRI systems.
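The decompose-classify-reconstruct workflow above can be sketched end to end on toy data. The example below uses scikit-learn's FastICA on a synthetic three-channel mixture of a sinusoidal "neural" source and a spiky "blink" source, identifies the artifact component by its kurtosis (one simple proxy for the spectral/temporal criteria described above), zeroes it, and back-projects. All signals and the mixing matrix are invented for illustration.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
neural = np.sin(2 * np.pi * 10 * t)                    # stand-in for alpha activity
blink = (np.abs(np.sin(2 * np.pi * 0.5 * t)) > 0.99).astype(float) * 5  # spiky "EOG"
S_true = np.c_[neural, blink]
A = np.array([[1.0, 0.9], [0.8, 0.2], [0.3, 1.0]])     # hypothetical 3-channel mixing
X = S_true @ A.T                                        # observed channels (samples x ch)

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)            # estimated components (samples x components)

# Identify the "blink" component: spiky signals are strongly super-Gaussian
artifact = int(np.argmax(kurtosis(S, axis=0)))
S[:, artifact] = 0.0                # zero the artifactual component
X_clean = ica.inverse_transform(S)  # project remaining components back to sensors
```

In practice, automated classifiers such as ICLabel combine topography, time course, and spectrum rather than a single statistic, but the reconstruction step is the same.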

Diagram: Raw Multichannel EEG Data → Data Whitening → ICA Decomposition → Component Inspection → Artifact Classification → Signal Reconstruction → Cleaned EEG Data

ICA-Based Artifact Removal Workflow

Successful implementation of ICA and SOBI for HRI research requires both specialized software tools and appropriate hardware configurations. The computational demands of these algorithms, particularly for real-time applications, necessitate a robust processing environment.

Table 3: Essential Software Tools for BSS Implementation

Tool/Resource Function Implementation Platform
EEGLAB Interactive environment for ICA; includes runica, FastICA, SOBI [22] MATLAB
MNE-Python Open-source Python package for EEG/MEG analysis Python
ORICA Plugin Enables real-time, recursive ICA for online processing [23] EEGLAB/MATLAB
FastICA Package Efficient implementation of FastICA algorithm R, Python, MATLAB [21]
SpyICA Toolbox Contains Python implementation of ORICA Python [23]

For experimental data acquisition in HRI contexts, wearable EEG systems with dry or semi-dry electrodes are typically employed due to their practicality and rapid setup [16]. However, these systems present specific challenges for BSS, including a reduced number of channels (typically below 16), which can impair the efficacy of source separation methods like ICA and SOBI [16] [19]. Successful decomposition generally requires the number of sensors to equal or exceed the number of independent sources to be identified; therefore, low-density systems may struggle to separate neural signals from multiple concurrent artifacts. Furthermore, artifacts in wearable EEG exhibit specific features due to dry electrodes, reduced scalp coverage, and subject mobility, necessitating tailored processing pipelines that explicitly address these peculiarities [16].

Experimental Protocols and Validation Metrics

Protocol for ICA-Based Artifact Removal in HRI Research

Step 1: Data Preparation and Preprocessing Acquire EEG data using a wearable system with an appropriate channel configuration (≥16 channels recommended for effective decomposition). Apply a high-pass filter (e.g., 1 Hz cutoff) to remove slow drifts and a low-pass filter (e.g., 50-60 Hz cutoff) to reduce high-frequency noise. Identify and interpolate severely noisy channels. For ICA, use continuous data with only this minimal filtering, as aggressive filtering can alter the statistical properties essential for successful source separation [22].

Step 2: Execute ICA Decomposition Select an appropriate ICA algorithm (e.g., Infomax for general use, FastICA for speed, ORICA for real-time applications). The data matrix ( \mathbf{X} ) (channels × time points) is decomposed such that ( \mathbf{S} = \mathbf{W} \mathbf{X} ), where ( \mathbf{W} ) is the unmixing matrix and ( \mathbf{S} ) contains the independent components. Ensure sufficient data length is available for stable decomposition; as a rule of thumb, the number of data points (time samples) should be at least the square of the number of channels [22].

Step 3: Component Identification and Classification Visualize component properties using tools like EEGLAB's iclabel or similar automated classifiers. Inspect:

  • Topographic maps: Identify components with distributions characteristic of artifacts (e.g., frontal polarity for eye blinks, temporal/neck patterns for EMG).
  • Time courses: Look for patterns correlating with movement or other non-neural events.
  • Power spectra: Identify components dominated by high-frequency content (muscle) or low-frequency drifts (movement).
  • Fingerprint plots: Utilize multi-attribute visualizations that display features like clustering, autocorrelation, and frequency distribution to differentiate neural from artifactual components [24].

Step 4: Artifact Removal and Signal Reconstruction After identifying artifactual components (e.g., components 1, 2, and 4 in a hypothetical decomposition), reconstruct the cleaned data by projecting all components except the artifactual ones back to the sensor space. This is mathematically achieved by creating a modified component matrix ( \mathbf{S}_{clean} ) where the rows corresponding to artifacts are set to zero, and then computing ( \mathbf{X}_{clean} = \mathbf{W}^{-1} \mathbf{S}_{clean} ), where ( \mathbf{W}^{-1} ) is the inverse of the unmixing matrix (equivalent to the mixing matrix ( \mathbf{A} )) [20].
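The back-projection in Step 4 is a few lines of linear algebra. The sketch below uses a random matrix in place of a real ICA unmixing matrix and arbitrary component indices, purely to show the zero-and-invert operation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_t = 4, 1000
W = rng.standard_normal((n_ch, n_ch))   # stand-in for the ICA unmixing matrix
X = rng.standard_normal((n_ch, n_t))    # channels x time
S = W @ X                               # independent components

artifact_ics = [0, 2]                   # components judged artifactual in Step 3
S_clean = S.copy()
S_clean[artifact_ics, :] = 0.0          # zero the artifact rows

A = np.linalg.inv(W)                    # mixing matrix (W^{-1})
X_clean = A @ S_clean                   # back-projection to sensor space
```

Unmixing the cleaned data again recovers zeros exactly in the removed component rows, confirming the projection removed only those components.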

Performance Validation Metrics

Validating the efficacy of artifact removal is crucial for ensuring signal quality in HRI applications. The following metrics are commonly employed:

  • Accuracy: Percentage of correctly identified artifact components when a clean reference signal is available [16].
  • Selectivity: The ability to remove artifacts while preserving neural signals of interest, assessed with respect to physiological signal integrity [16].
  • Signal-to-Noise Ratio (SNR) Improvement: Measured by comparing SNR before and after artifact removal in task-related data.
  • Mutual Information Reduction: Quantifying the decrease in statistical dependence between channels after processing.
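The SNR-improvement metric above reduces to a difference of decibel ratios. A minimal sketch, with invented signal and noise estimates standing in for pre- and post-cleaning residuals:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels from signal and noise estimates of equal length."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

rng = np.random.default_rng(3)
s = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 1000))  # task-related component
noise_before = 0.8 * rng.standard_normal(1000)        # residual before cleaning
noise_after = 0.1 * rng.standard_normal(1000)         # residual after cleaning
improvement = snr_db(s, noise_after) - snr_db(s, noise_before)
```

In real validation, the noise estimates would come from artifact-only reference segments or from the difference against a simultaneously recorded clean reference.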

Diagram: Accuracy → Clean Reference Signal; Selectivity → Physiological Signal Integrity; SNR Improvement → Task-Related Data; Mutual Information Reduction → Inter-Channel Dependence

Key Validation Metrics for Artifact Removal

Real-Time Implementation for Human-Robot Interaction

The transition from offline analysis to real-time artifact removal presents significant challenges and opportunities for HRI research. Traditional ICA algorithms are computationally intensive and typically operate on complete datasets, making them unsuitable for real-time applications where low latency is critical. However, recent advancements have enabled online ICA implementations that update the decomposition recursively as new data arrives.

The Online Recursive Independent Component Analysis (ORICA) algorithm represents a breakthrough in this domain, enabling real-time source separation suitable for dynamic HRI environments [23]. ORICA employs a recursive update rule for the unmixing matrix, allowing it to adapt to non-stationary signal statistics—a common characteristic in EEG data during human-robot interaction. Implementation options include the original ORICA plugin for EEGLAB and Python implementations within the SpyICA toolbox [23].

For real-time BCI applications in HRI, a typical processing pipeline integrates ORICA with other processing modules:

  • Data Acquisition: Continuous EEG streaming from wearable headset.
  • Preprocessing: Basic filtering and normalization.
  • Online Artifact Removal: ORICA continuously updates component estimates and removes identified artifactual components.
  • Feature Extraction: Derivation of relevant neural features (e.g., band power, event-related potentials).
  • Classification: Translation of neural features into control commands for the robot.

This integrated approach enables robust BCI performance even in the presence of motion artifacts and other noise sources inherent to interactive environments. Studies have demonstrated that such pipelines can effectively handle artifacts arising from natural movements during HRI tasks, including walking, gesturing, and collaborative manipulation activities [19].
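The recursive flavor of such online pipelines can be illustrated with the core sample-by-sample natural-gradient unmixing update they build on. This is a deliberate simplification: the published ORICA algorithm additionally performs recursive whitening and uses an adaptive forgetting factor, neither of which is shown here.

```python
import numpy as np

def online_ica_step(W, x, mu=1e-3):
    """One sample-by-sample natural-gradient ICA update (simplified).
    tanh is a common score function for super-Gaussian sources."""
    y = W @ x                                   # current source estimate
    f = np.tanh(y)
    W += mu * (np.eye(W.shape[0]) - np.outer(f, y)) @ W
    return W, y

# Toy stream: two mixed super-Gaussian (Laplacian) sources unmixed on the fly
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # hypothetical mixing
W = np.eye(2)
for _ in range(20000):
    s = rng.laplace(size=2)                     # spiky sources
    W, y = online_ica_step(W, A @ s)
```

Because the unmixing matrix is updated per sample, the decomposition can track the non-stationary statistics typical of EEG during interaction.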

Comparative Analysis and Future Directions

While both ICA and SOBI offer powerful blind source separation capabilities, their relative performance varies across different HRI scenarios. ICA generally excels at separating temporally independent sources with non-Gaussian distributions, making it particularly effective for ocular artifacts, muscle activity, and cardiac interference. SOBI's strength lies in separating sources with distinct temporal autocorrelation profiles, offering advantages for removing periodic noise and certain motion artifacts.

Future developments in BSS for HRI research are likely to focus on several key areas:

  • Hybrid Approaches: Combining ICA with deep learning methods for improved artifact identification and removal, particularly for muscular and motion artifacts [16].
  • Adaptive Algorithms: Enhanced recursive algorithms capable of rapidly tracking changes in signal statistics during dynamic interactions.
  • Auxiliary Sensor Integration: Leveraging data from inertial measurement units (IMUs) and other motion sensors to improve artifact detection under ecological conditions [16].
  • Computational Efficiency: Optimization of algorithms for deployment on embedded systems with limited processing resources.

The integration of advanced BSS techniques like ICA and SOBI into the HRI research pipeline represents a critical enabling technology for developing robust, real-world brain-computer interfaces. By effectively separating neural signals from contamination sources, these methods pave the way for more natural and reliable human-robot collaboration across diverse application domains, from rehabilitation robotics to industrial human-robot teams.

The advancement of Human-Robot Interaction (HRI) relies heavily on the accurate interpretation of user states through physiological signals like Electroencephalography (EEG). However, these signals are frequently corrupted by artifacts—unwanted noise from muscular activity, eye movements, or motion—which can severely degrade system performance. Real-time artifact removal is, therefore, not merely a preprocessing step but a critical component for enabling robust and reliable HRI systems. Techniques such as Common Average Reference (CAR), Localized Regression Removal (LRR), and Adaptive Filtering have emerged as powerful solutions for this challenge, each offering distinct mechanisms for isolating and removing noise while preserving neural information. Their successful implementation allows for more accurate brain-computer interfaces, adaptive automation, and closed-loop robotic systems that can respond to user intentions with high fidelity [3] [25] [26].

The necessity for these techniques is particularly pronounced in real-world HRI applications. For instance, in robotic-assisted surgery, a surgeon's high mental workload must be accurately detected via EEG to trigger adaptive assistance, a process compromised by motion and physiological artifacts [25]. Similarly, providing sensory feedback in prosthetic hands using Transcutaneous Electrical Nerve Stimulation (TENS) introduces substantial artifacts into surface Electromyography (sEMG) signals used for control, necessitating advanced filtering for closed-loop operation [27]. This document details the application and protocols for CAR, LRR, and Adaptive Filtering, providing a structured framework for their implementation in HRI research.

Theoretical Foundations and Key Algorithms

Common Average Reference (CAR)

CAR is a spatial filtering technique that operates on the principle of re-referencing. It assumes that artifacts are common to all electrodes while neural signals are local. By subtracting the average signal of all electrodes from each individual channel, CAR effectively suppresses widespread noise.

  • Mathematical Formulation: For an EEG recording with N channels, the CAR-transformed signal Y_i(t) for channel i is given by: Y_i(t) = X_i(t) - (1/N) * Σ_{j=1}^{N} X_j(t) where X_i(t) is the original signal of the i-th channel at time t.
  • Applications in HRI: CAR is particularly useful as a preliminary step in pipelines for emotion recognition [3] [7] and error-related potential (ErrP) detection [26]. Its computational efficiency makes it suitable for real-time systems where low latency is critical.
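The CAR formula above is a one-line operation on a channels-by-samples array. The sketch below applies it to synthetic data where an identical common-mode artifact has been added to every electrode; by construction, CAR removes that artifact exactly (at the cost of also subtracting the average of the neural signals, which is inherent to the method).

```python
import numpy as np

def common_average_reference(X):
    """Subtract the instantaneous mean across channels (rows) from each channel."""
    return X - X.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
neural = rng.standard_normal((8, 1000))        # channels x samples
common_noise = 5 * rng.standard_normal(1000)   # identical on all electrodes
X = neural + common_noise
Y = common_average_reference(X)                # common-mode noise is gone
```

Its O(N) cost per sample is why CAR suits the low-latency pipelines described above.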

Localized Regression Removal (LRR)

LRR is an advanced version of regression-based artifact removal that targets local spatial correlations. Instead of using a global reference like CAR, LRR estimates and removes artifacts based on the signals from a local subset of electrodes, often surrounding the target channel. This makes it more effective for artifacts with a localized topography, such as ocular artifacts.

  • Algorithmic Principle: For a given channel, the artifact is estimated via a linear regression model using signals from a predefined set of neighboring "source" channels (e.g., EOG or EMG channels). This estimated artifact is then subtracted from the target channel.
  • HRI Relevance: LRR has been successfully applied in motor imagery classification for BCIs [28] and is integral to methods like the Four Class Iterative Filtering (FCIF) for ocular artifact removal, which is crucial for controlling assistive devices [28].
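The regression principle behind LRR can be sketched with a generic least-squares removal of reference channels: fit each EEG channel on the reference signals and subtract the fitted artifact. Note this sketch uses a single global EOG reference for all channels; LRR proper restricts the regression to a local subset of channels per target, which this example does not implement.

```python
import numpy as np

def regress_out(eeg, refs):
    """Remove the least-squares projection of reference channels (e.g., EOG)
    from each EEG channel. eeg: (n_ch, n_t), refs: (n_ref, n_t)."""
    B = eeg @ refs.T @ np.linalg.inv(refs @ refs.T)  # regression coefficients
    return eeg - B @ refs                            # subtract fitted artifact

rng = np.random.default_rng(0)
n_t = 2000
eog = rng.standard_normal((1, n_t))                  # artifact source channel
neural = rng.standard_normal((4, n_t))
# Frontal channels receive the strongest ocular contamination
eeg = neural + np.array([[0.9], [0.5], [0.2], [0.05]]) @ eog
cleaned = regress_out(eeg, eog)
```

By the normal equations, the residual is exactly orthogonal to the reference, so the cleaned channels are (to numerical precision) decorrelated from the EOG.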

Adaptive Filtering

Adaptive filters are a class of algorithms that dynamically adjust their parameters to track a non-stationary signal or noise statistics. This is ideal for HRI, where the user and environment are in constant flux. The core component is an adaptive algorithm, such as the Least-Mean-Squares (LMS), that minimizes the error between the filter's output and a desired signal.

  • Core Mechanism: An adaptive filter requires a primary input (the contaminated signal, d(n) = s(n) + v(n)) and a reference input (x(n)) that is correlated with the artifact v(n). The filter adjusts its weights w to produce an output y(n) that best estimates v(n). This estimate is then subtracted from the primary input to recover the clean signal s(n) [29] [27].
  • Wide-Ranging Applications: This technique is highly versatile. It has been used to remove TENS artifacts from sEMG in prosthetic control [27], motion artifacts from ECG [29], and motion artifacts from fNIRS signals using an accelerometer as a reference [30]. Its ability to operate in real-time with minimal signal distortion makes it a cornerstone of modern HRI systems.
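The noise-canceller structure just described (primary input d(n) = s(n) + v(n), reference x(n) correlated with v(n)) can be sketched with a basic LMS filter. The tap count, step size, and synthetic signals below are illustrative choices, not values from the cited studies.

```python
import numpy as np
from scipy.signal import lfilter

def lms_cancel(d, x, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller. Returns e(n) = d(n) - y(n), which
    converges toward the clean signal s(n) as y(n) tracks the artifact v(n)."""
    w = np.zeros(n_taps)
    e = np.zeros_like(d)
    for n in range(n_taps - 1, len(d)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]  # x[n], x[n-1], ... (reference taps)
        y = w @ x_vec                          # current artifact estimate
        e[n] = d[n] - y
        w += mu * e[n] * x_vec                 # LMS weight update
    return e

rng = np.random.default_rng(0)
n = 5000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))   # slow underlying signal
x = rng.standard_normal(n)                    # reference input
v = lfilter([0.8, 0.4, 0.2], [1.0], x)        # artifact: causally filtered reference
cleaned = lms_cancel(s + v, x)
```

Because s is uncorrelated with the reference, the weights converge to the artifact path and the error output approaches the clean signal.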

Table 1: Comparison of Core Artifact Removal Techniques

Technique Underlying Principle Primary Strength Best Suited for Artifact Type Computational Load
Common Average Reference (CAR) Spatial re-referencing Simplicity and speed Global, common-mode noise Low
Localized Regression (LRR) Local linear regression Effectiveness on localized artifacts Ocular, localized muscle artifacts Medium
Adaptive Filtering Dynamic parameter adjustment Handles non-stationary noise Motion, TENS, and physiological artifacts Medium to High

Experimental Protocols and Implementation

Protocol 1: Real-Time TENS Artifact Removal for Prosthetic Control

This protocol outlines the procedure for implementing an adaptive filter to remove TENS artifacts from sEMG signals, enabling simultaneous sensory feedback and precise control of prosthetic hands [27].

  • Objective: To validate an adaptive artifact removal method that cancels TENS interference across different frequencies and pulse widths in real-time, restoring sEMG signal quality for intention estimation.
  • Experimental Setup:
    • Participants: 12 participants for offline and online validation.
    • Signal Acquisition: Four sEMG signals are collected from flexor digitorum superficialis, flexor carpi ulnaris, and extensor carpi ulnaris muscles using a wearable sEMG system.
    • Artifact Generation: A TENS system is used for sensory feedback, generating artifacts with varying frequencies and pulse widths.
  • Data Acquisition Parameters:
    • sEMG Sampling Rate: ≥ 1000 Hz.
    • TENS Parameter Ranges: Frequency 20-100 Hz, Pulse Width 50-200 µs.
  • Real-Time Processing Workflow:
    • Step 1: Signal Preprocessing. Band-pass filter the raw sEMG signal (e.g., 20-450 Hz) to remove baseline wander and high-frequency noise.
    • Step 2: Adaptive Filtering. Implement a modified LMS adaptive filter.
      • Primary Input: The TENS-contaminated sEMG signal.
      • Reference Input: The mean of previous TENS artifacts or a signal from a dedicated reference electrode.
      • Algorithm: The filter weights are updated using the sign-sign LMS rule for stability: w(n+1) = w(n) + μ * sign(e(n)) * sign(x(n)), where μ is the step size, e(n) is the error (cleaned) signal, and x(n) is the reference input vector.
    • Step 3: Artifact Discrimination. Apply temporal separation logic to distinguish between stimulation and recording phases if necessary.
    • Step 4: Intention Estimation. The cleaned sEMG signal is fed into a pattern recognition or proportional control algorithm for prosthetic hand movement.
  • Validation Metrics:
    • Signal-to-Noise Ratio (SNR): Calculate before and after filtering. The protocol achieved an average increase of 10.3 dB [27].
    • Target Reaching Experiment (TRE): Assess real-time control performance using metrics such as completion time and success rate, which should recover to the levels observed without the TENS artifact present.
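The sign-sign LMS update from Step 2 can be prototyped in a few lines. The sketch below is a minimal NumPy illustration, not the implementation from [27]; the function name, tap count, and step size are illustrative choices.

```python
import numpy as np

def sign_sign_lms(d, x, num_taps=8, mu=2e-3):
    """Sign-sign LMS adaptive filter (illustrative sketch).

    d : contaminated primary signal d(n) (artifact + sEMG)
    x : artifact reference signal x(n) (e.g., mean of prior TENS artifacts)
    Returns e(n) = d(n) - w(n)^T x(n), the cleaned sEMG estimate.
    """
    w = np.zeros(num_taps)                  # adaptive filter weights
    e = np.zeros(len(d))                    # cleaned output (error signal)
    for n in range(num_taps, len(d)):
        x_vec = x[n - num_taps:n][::-1]     # most recent reference taps
        e[n] = d[n] - w @ x_vec             # subtract estimated artifact
        # sign-sign update: cheap and numerically robust for real time
        w += mu * np.sign(e[n]) * np.sign(x_vec)
    return e
```

Because only signs enter the update, the rule maps well onto fixed-point embedded hardware, at the cost of slower convergence than standard LMS.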

[Workflow diagram: signal acquisition → band-pass preprocessing of raw sEMG → contaminated sEMG as primary input d(n); mean of prior TENS artifacts as reference input x(n) → modified LMS adaptive filter → estimated artifact y(n) subtracted from d(n) → clean sEMG output → prosthetic intention estimation algorithm.]

Real-Time Adaptive Filtering for Prosthetic Control

Protocol 2: Mental Workload-Based Adaptive Automation in Surgery

This protocol describes a framework for using EEG-based mental workload (MWL) assessment to trigger adaptive automation in robotic-assisted surgery (RAS), requiring robust, real-time artifact removal [25].

  • Objective: To develop a real-time multi-sensing model for detecting surgeon MWL and trigger a semi-autonomous suction tool to mitigate cognitive overload during high-demand tasks.
  • Experimental Setup:
    • Participants: 10 participants for model development; 9 surgical trainees for evaluation.
    • Apparatus: Da Vinci Research Kit (dVRK), 32-channel EEG headset (e.g., g.Nautilus), eye-tracker (e.g., Tobii Pro Glasses).
    • Task: Participants perform a needle-passing task under low MWL (non-hemorrhage) and high MWL (hemorrhage with auditory oddball task) conditions.
  • Data Acquisition Parameters:
    • EEG Sampling Rate: 250 Hz.
    • Key Electrodes: Prefrontal and temporal lobes (e.g., AF3, AF4, T7, T8).
  • Real-Time Processing and Analysis Workflow:
    • Step 1: Online Artifact Removal. A hybrid approach is employed:
      • Apply a CAR filter to remove common-mode noise.
      • Use an Adaptive Filter with an accelerometer or EOG reference to suppress motion and ocular artifacts.
      • Alternatively, employ a Blind Source Separation (BSS) method like ICA or Mutual Information with Epanechnikov kernel [7] to isolate and remove artifact components.
    • Step 2: Feature Extraction. From the cleaned EEG, compute power spectral densities in theta (4-8 Hz), alpha (8-13 Hz), and beta (13-30 Hz) bands over frontal regions.
    • Step 3: MWL Classification. A machine learning classifier (e.g., SVM or LDA) uses the spectral features to predict high/low MWL states in real-time. The developed model achieved 77.9% accuracy in predicting high MWL [25].
    • Step 4: Adaptive Trigger. When high MWL is detected, the system activates the semi-autonomous suction tool to clear the surgical field.
  • Validation Metrics:
    • Classifier Accuracy: For MWL level prediction.
    • Task Performance: Time to complete task and error rates.
    • Subjective Workload: NASA-TLX scores, which were lower with the MWL-AA system active [25].
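The spectral features in Step 2 can be computed with a standard Welch estimate. The following NumPy/SciPy sketch (function and constant names are our own, not from [25]) returns one band-power feature per channel and band.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_powers(eeg, fs=250):
    """Band power per channel via Welch's PSD.

    eeg : (n_channels, n_samples) cleaned EEG segment
    Returns an (n_channels, n_bands) feature matrix for the MWL classifier.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2), axis=-1)
    df = freqs[1] - freqs[0]
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=-1) * df)  # integrate PSD over band
    return np.column_stack(feats)
```

The resulting matrix can be fed directly to an SVM or LDA classifier for the high/low MWL decision.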

Table 2: Performance Metrics from Cited HRI Studies

| Experiment / Study | Primary Metric | Reported Performance | Impact on HRI Task |
| --- | --- | --- | --- |
| TENS Artifact Removal [27] | Signal-to-Noise Ratio (SNR) Increase | +10.3 dB average | Restored prosthetic control performance to no-TENS levels |
| MWL-AA for Surgery [25] | High MWL Prediction Accuracy | 77.9% | Reduced surgeon workload and improved task performance |
| Emotion Recognition [7] | Classification Accuracy | 80.13% with Epanechnikov kernel | Enhanced emotion estimation for responsive HRI |
| Motor Imagery [28] | Four-Class Classification Accuracy | 98.575% mean accuracy | High-precision control of assistive devices |

The Scientist's Toolkit: Research Reagent Solutions

Table 3: Essential Materials and Tools for HRI Artifact Removal Research

| Item Name | Specification / Example | Function in Research |
| --- | --- | --- |
| Wearable EEG System | g.Nautilus (32-channel), dry-electrode systems | Acquires neural data in real-time from human users in interactive scenarios [25] [31]. |
| Auxiliary Biosensor Kit | Tobii Pro Glasses 2.0 (eye-tracker), Inertial Measurement Unit (IMU) | Provides reference signals (e.g., gaze, acceleration) for adaptive filtering of ocular and motion artifacts [25] [30]. |
| Stimulation System | Transcutaneous Electrical Nerve Stimulation (TENS) device | Generates sensory feedback in prosthetic applications, also serving as a known source of artifact for validation [27]. |
| Computing Platform | Laptop/PC with real-time operating system (e.g., Ubuntu with ROS) | Runs artifact removal algorithms and the primary HRI task with strict timing constraints. |
| Software Library | MATLAB (Signal Processing Toolbox), Python (MNE, SciPy, PyTorch) | Provides implementations of standard filters, ICA, LMS, and deep learning models for signal processing [29] [7]. |
| Robotic Platform | Da Vinci Research Kit (dVRK), prosthetic hand, humanoid robot | Serves as the interactive endpoint for validating the artifact removal pipeline in a realistic HRI loop [25] [27]. |

Advanced Visualization of Signal Processing Workflow

The following diagram illustrates a consolidated, advanced workflow for real-time artifact removal in an HRI setting, integrating multiple techniques discussed in this document.

[Workflow diagram: raw bio-signals (EEG, sEMG, fNIRS) → preprocessing (band-pass/notch filter) → spatial filter (CAR) → adaptive filter (LMS/RLS, driven by auxiliary reference signals: accelerometer, EOG, stimulus trigger) and/or component analysis & regression (ICA, LRR, mutual information), in optional sequence → verified clean signal → HRI application (ErrP detection, emotion recognition, prosthetic control, adaptive automation).]

Integrated Real-Time Artifact Removal Workflow

Real-time artifact removal is a critical enabling technology for human-robot interaction (HRI) research, allowing for the study of brain dynamics during natural, whole-body movement. Electroencephalography (EEG) provides the temporal resolution necessary to investigate cortical processes during dynamic HRI tasks, but its signal quality is severely compromised by motion, muscle, and other artifacts. Artifact Subspace Reconstruction (ASR) and iCanClean represent two advanced, real-time capable pipelines that address this challenge through fundamentally different approaches. This application note provides a detailed technical overview of both methods, including quantitative performance comparisons, experimental protocols, and implementation guidelines tailored for HRI research settings.

ASR is an automated, online, component-based method for removing transient or large-amplitude artifacts from multi-channel EEG recordings without requiring reference noise signals [32] [33]. Instead, it relies on clean calibration data to establish a baseline brain state and operates as a sliding-window channel-interpolation algorithm that identifies contaminated channels and reconstructs them from uncontaminated channels via principal component analysis [33].

iCanClean is a novel cleaning algorithm that uses reference noise recordings (e.g., from IMUs or dedicated noise sensors) to remove noisy EEG subspaces through canonical correlation analysis (CCA) [34] [35]. It functions as a generalized framework for removing multiple artifact sources in real time without requiring clean calibration data or risking accidental removal of brain activity [34].

Table 1: Quantitative Performance Comparison of Artifact Removal Methods

| Method | Data Quality Score (All Artifacts) | Required Reference | Computational Efficiency | Key Artifacts Addressed |
| --- | --- | --- | --- | --- |
| No Cleaning | 15.7% [34] | None | N/A | N/A |
| ASR | 27.6% [34] | Clean calibration data [33] | High [33] | Motion, muscle, eye [34] |
| Auto-CCA | 27.2% [34] | None | High [34] | Muscle, line noise [34] |
| Adaptive Filtering | 32.9% [34] | Reference noise signals [34] | Medium [34] | Eye artifacts, motion (with modifications) [34] |
| iCanClean | 55.9% [34] | Reference noise signals [34] | High [34] | Motion, muscle, eye, line noise [34] |

Table 2: iCanClean Parameter Optimization Findings

| Parameter | Optimal Value | Effect of Deviation | Application Context |
| --- | --- | --- | --- |
| Window Length | 4 seconds [36] | Shorter windows may miss artifacts; longer windows reduce adaptability | Mobile EEG during walking [36] |
| Cleaning Aggressiveness (r²) | 0.65 [36] | Lower values preserve brain activity; higher values remove more artifacts | General mobile settings [36] |
| Noise Channels | 16-64 channels [36] | Performance gradually decreases with fewer channels | High-density EEG systems [36] |

Experimental Protocols

Protocol 1: ASR Implementation for Sleep EEG Studies

Application Context: Long-duration EEG recordings with minimal experimenter intervention, such as all-night sleep studies [33].

Required Equipment:

  • High-density EEG system (64+ channels recommended)
  • Standard electrode caps with saline or gel electrodes
  • EEG recording system with sufficient data storage capacity

Procedure:

  • Calibration Data Acquisition: Record 5-10 minutes of resting-state EEG in a stationary, awake position to establish the reference data [33].
  • Parameter Configuration:
    • Set the sliding window length to 3-6 times longer than the default for sleep studies to capture slow waves [33].
    • Configure the standard deviation (SD) cutoff parameter between 10-30 SDs [33].
    • Allow a maximum of 7.5% of channels to be interpolated [33].
  • Data Processing:
    • Apply ASR as a sliding-window algorithm that identifies contaminated channels in each window.
    • Reconstruct identified contaminated channels using spatial information from uncontaminated channels.
  • Quality Assessment:
    • Verify cleaned data retains expected physiological patterns (e.g., sleep spindles, K-complexes).
    • Check for over-cleaning by comparing spectral profiles to established norms.
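The sliding-window principle behind the processing step can be illustrated with a deliberately simplified NumPy sketch. This is not the full ASR algorithm (which uses robust statistics and reconstructs channels rather than simply dropping components); for actual studies, established implementations such as ASRpy or EEGLAB's clean_rawdata are preferable [32] [33].

```python
import numpy as np

def asr_sketch(data, calib, cutoff=20.0, win=500):
    """Simplified ASR-style cleaning (illustrative only, not full ASR).

    calib : (n_ch, n_calib) clean calibration EEG
    data  : (n_ch, n_samp) EEG to clean
    Components whose windowed RMS exceeds cutoff * calibration RMS are
    zeroed and the window is reconstructed from the remaining ones.
    """
    # principal axes of the calibration data define the baseline state
    cov = np.cov(calib)
    _, evecs = np.linalg.eigh(cov)
    calib_rms = np.sqrt(np.mean((evecs.T @ calib) ** 2, axis=1))
    out = data.copy()
    for start in range(0, data.shape[1] - win + 1, win):
        seg = data[:, start:start + win]
        comps = evecs.T @ seg                      # project into PC space
        rms = np.sqrt(np.mean(comps ** 2, axis=1))
        comps[rms > cutoff * calib_rms] = 0.0      # drop artifactual components
        out[:, start:start + win] = evecs @ comps  # reconstruct channels
    return out  # any tail shorter than one window is left uncleaned
```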

Protocol 2: iCanClean for Mobile HRI Paradigms

Application Context: Real-time artifact removal during human-robot interaction tasks involving walking, reaching, or other whole-body movements [34] [36].

Required Equipment:

  • Dual-layer mobile EEG system with 120+ electrodes [36]
  • Inertial Measurement Units (IMUs) embedded in EEG electrodes or separate head-mounted units [35]
  • Reference noise sensors integrated into the EEG cap
  • Synchronized data acquisition system

Procedure:

  • System Setup:
    • Configure a minimum of 16 noise reference channels (64 recommended for optimal performance) [36].
    • Ensure proper synchronization between EEG, IMU, and noise sensor data streams [35].
  • Parameter Configuration:
    • Set window length to 4 seconds for walking motion artifacts [36].
    • Configure cleaning aggressiveness to r² = 0.65 [36].
    • Align sampling rates across all data streams (EEG, IMU, noise sensors).
  • Real-time Processing Pipeline:
    • Acquire EEG data concurrently with IMU and noise sensor references.
    • Apply iCanClean using canonical correlation analysis to identify and remove artifact subspaces.
    • Output cleaned EEG data with minimal latency for downstream applications.
  • Validation:
    • Quantify improvement using Data Quality Score (correlation between brain sources and EEG channels) [34].
    • For offline analysis, assess independent component analysis (ICA) decomposition quality through dipole localization and brain probability metrics [36].
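The CCA step at the heart of the pipeline can be sketched as follows. This is an illustrative reimplementation of the principle, not the authors' iCanClean code: EEG canonical variates sharing r² above threshold with the noise references are flagged and projected out of the window [34].

```python
import numpy as np

def icanclean_sketch(eeg, noise, r2_thresh=0.65):
    """iCanClean-style cleaning via CCA (illustrative sketch).

    eeg   : (n_samples, n_ch) EEG window
    noise : (n_samples, n_ref) synchronized noise reference channels
    """
    Y = eeg - eeg.mean(0)
    X = noise - noise.mean(0)
    # orthonormal bases for each set (economical QR)
    Qy, _ = np.linalg.qr(Y)
    Qx, _ = np.linalg.qr(X)
    U, s, _ = np.linalg.svd(Qy.T @ Qx, full_matrices=False)
    bad = s ** 2 > r2_thresh          # s holds the canonical correlations
    if not np.any(bad):
        return eeg
    # EEG-side canonical variates that track the noise references
    artifacts = Qy @ U[:, bad]        # orthonormal (n_samples, n_bad)
    # project the artifact subspace out of the EEG
    cleaned = Y - artifacts @ (artifacts.T @ Y)
    return cleaned + eeg.mean(0)
```

In a streaming setting this would be applied per 4-second window, matching the parameter findings in Table 2.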

Workflow Visualization

[Diagram: two parallel workflows. ASR: EEG data acquisition → clean calibration data (5-10 min resting state) → parameter configuration (SD cutoff 10-30; max 7.5% of channels interpolated) → sliding-window processing (identify contaminated channels, PCA-based reconstruction) → cleaned EEG data. iCanClean: multi-modal data acquisition → noise reference signals (IMU data, dedicated noise sensors) → parameter configuration (4-second window, r² = 0.65 aggressiveness) → canonical correlation analysis (identify and remove artifact subspaces) → cleaned EEG data.]

Diagram 1: Comparative Workflows of ASR and iCanClean

Research Reagent Solutions

Table 3: Essential Research Materials and Tools

| Item | Function | Example Implementation |
| --- | --- | --- |
| High-Density EEG System | Record electrocortical activity with sufficient spatial resolution | 120+ electrode dual-layer systems [36] |
| Inertial Measurement Units (IMUs) | Capture head motion dynamics for reference-based artifact removal | 9-axis IMUs with accelerometer, gyroscope, magnetometer [35] |
| Reference Noise Sensors | Provide dedicated noise recordings for subspace identification | Integrated noise sensors in EEG cap design [34] |
| Artifact Subspace Reconstruction (ASR) | Remove transient artifacts without reference signals | Python (ASRpy) or EEGLAB implementations [32] [33] |
| iCanClean Algorithm | Remove multiple artifact types using reference noise recordings | Custom implementation based on canonical correlation analysis [34] |
| Synchronization System | Align temporal data across multiple sensor modalities | Lab Streaming Layer (LSL) or hardware triggers [35] |
| Mobile Phantom Head | Validate artifact removal performance with ground truth | Electrically conductive phantom with embedded sources [34] |

Implementation Guidelines for HRI Research

For human-robot interaction studies where ecological validity and real-time processing are paramount, iCanClean offers significant advantages by effectively handling the complex artifact profiles encountered during whole-body movement and interaction with robotic systems. The method's ability to remove multiple artifact types simultaneously while preserving brain activity makes it particularly suitable for HRI paradigms that involve walking, reaching, or other dynamic movements [34].

When implementing real-time artifact removal for closed-loop HRI systems, consider the following guidelines:

  • System Integration: Ensure tight synchronization between EEG processing, robot control systems, and sensory feedback loops.
  • Computational Efficiency: Both ASR and iCanClean offer real-time capability, but verify processing latency meets the temporal requirements of your specific HRI task [34] [33].
  • Parameter Optimization: Conduct pilot studies to fine-tune parameters for your specific experimental setup and artifact profile, as optimal settings may vary across different HRI tasks [36].

For researchers working with existing stationary datasets or with limited access to reference sensors, ASR provides a powerful alternative that can significantly improve data quality without requiring additional hardware modifications [33]. The choice between these approaches should be guided by specific research questions, available equipment, and the degree of ecological validity required in the HRI paradigm.

Emerging Deep Learning and Hybrid Optimization Approaches

The advancement of real-time Human-Robot Interaction (HRI) is critically dependent on the accurate interpretation of user states, such as intention, emotion, and cognitive load. Electroencephalography (EEG) provides a non-invasive, high-temporal-resolution method for monitoring these states. However, EEG signals are persistently contaminated by various artifacts—unwanted signals from non-neural sources—which can severely degrade HRI system performance. These artifacts include those from eye movements (EOG), muscle activity (EMG), cardiac activity (ECG), and environmental noise [37] [2] [38]. Effective artifact removal is therefore an essential preprocessing step to ensure the reliability of subsequent brain signal analysis and interpretation in dynamic HRI applications, such as adaptive robot control and implicit communication [26] [39].

This document outlines emerging deep learning (DL) and hybrid optimization approaches for real-time artifact removal, framed within the context of HRI research. The focus is on providing applicable notes and detailed, reproducible protocols for researchers and scientists. These methodologies are designed to overcome the limitations of traditional techniques like Independent Component Analysis (ICA) and regression, particularly their often-inadequate performance in dynamic, real-world HRI scenarios where computational efficiency and high accuracy are paramount [40] [38].

Quantitative Performance Comparison of Emerging Approaches

The table below summarizes the key performance metrics of several state-of-the-art artifact removal methods as reported in recent literature. These quantitative results provide a benchmark for comparing the efficacy of different approaches.

Table 1: Performance Metrics of Advanced Artifact Removal Methods

| Methodology | Reported SNR (dB) | Reported Accuracy (%) | Key Metric 1 (Value) | Key Metric 2 (Value) | Primary Application Context |
| --- | --- | --- | --- | --- | --- |
| FLM-Optimized Adaptive Filtering [40] | 42.042 | - | MSE: Low | RMSE: Low | General EEG Artifact Removal |
| AnEEG (LSTM-GAN) [37] | Improved SNR & SAR | - | NMSE: Lower | RMSE: Lower | General EEG Artifact Removal |
| Mutual Information (Epanechnikov) [7] | - | 80.13 | - | - | Emotion Recognition |
| Hybrid CNN-LSTM (with EMG) [38] | Increased post-processing SNR | - | - | - | SSVEP Preservation during HRI |
| DBGS (Hardware-Software Hybrid) [41] | - | - | SAAF: 12.77 ± 0.85 dB | Correlation: 0.84 ± 0.33 | sEMG Extraction during FES |

Abbreviations: SNR (Signal-to-Noise Ratio), MSE (Mean Square Error), RMSE (Root Mean Square Error), NMSE (Normalized Mean Square Error), SAR (Signal-to-Artifact Ratio), SAAF (Stimulus Artifact Attenuation Factor).

Another critical metric for HRI is the computational performance of these models, which directly impacts their feasibility for real-time application.

Table 2: Computational and Operational Performance

| Model/Framework | Key Innovation | Computational Performance | Failure Rate Reduction | Service Time Reduction |
| --- | --- | --- | --- | --- |
| HICM [9] | Hybrid Intelligent Computing Model (Annealing + Tabu Search) | Calculation time reduced by 8.67% | 7.87% lower | 15.09% shorter |
| Real-time EEG Optimization [2] | Lightweight artifact removal & feature smoothing | Suitable for real-time constraints | - | - |

Detailed Experimental Protocols

Protocol for AnEEG (LSTM-GAN) for General EEG Artifact Removal

This protocol describes the procedure for implementing the AnEEG model, which uses a Long Short-Term Memory network integrated with a Generative Adversarial Network (LSTM-GAN) for effective artifact suppression [37].

1. Equipment and Software Setup:

  • EEG Acquisition System: A standard multi-channel EEG system with appropriate sampling rate (e.g., 200 Hz or higher).
  • Computing Platform: A computer with a modern GPU (e.g., NVIDIA RTX series) for accelerated deep learning training.
  • Software Environment: Python 3.x with deep learning libraries (TensorFlow/Keras or PyTorch), and scientific computing packages (NumPy, SciPy).

2. Data Preparation and Preprocessing:

  • Data Collection: Obtain EEG datasets containing both artifact-contaminated signals and corresponding ground-truth clean signals, if available. Public datasets like EEG DenoiseNet or MIT-BIH Arrhythmia can be used for validation [37].
  • Signal Preprocessing: Apply band-pass filtering (e.g., 1-50 Hz) to remove extreme frequency noise and a notch filter (e.g., 50/60 Hz) to suppress line interference.
  • Data Segmentation: Partition the continuous EEG data into shorter, fixed-length epochs (e.g., 1-second segments).
  • Data Normalization: Normalize each epoch to have zero mean and unit variance to stabilize the training process.
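The segmentation and normalization steps above can be folded into a single helper. The sketch below (function and parameter names are our own) yields z-scored, fixed-length epochs ready for training.

```python
import numpy as np

def make_epochs(eeg, fs=200, epoch_s=1.0):
    """Segment continuous EEG into fixed-length, z-scored epochs.

    eeg : (n_channels, n_samples) band-pass filtered EEG
    Returns (n_epochs, n_channels, epoch_len) with zero mean and
    unit variance per channel and epoch.
    """
    L = int(fs * epoch_s)
    n = eeg.shape[1] // L                # number of complete epochs
    epochs = eeg[:, :n * L].reshape(eeg.shape[0], n, L).swapaxes(0, 1)
    mu = epochs.mean(axis=-1, keepdims=True)
    sd = epochs.std(axis=-1, keepdims=True)
    return (epochs - mu) / (sd + 1e-8)   # z-score, guarding against sd == 0
```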

3. Model Architecture and Training:

  • Generator Network (LSTM-based):
    • Input: A sequence of artifact-contaminated EEG samples.
    • Architecture: A stack of two LSTM layers, each with 50 hidden units, followed by a fully connected output layer.
    • Output: A generated "clean" EEG signal.
  • Discriminator Network (CNN-based):
    • Input: Either a real clean EEG signal or a generated signal from the Generator.
    • Architecture: A one-dimensional convolutional neural network (1D-CNN) with four layers and a sigmoid activation output unit.
    • Output: A probability score (0 to 1) indicating whether the input signal is real or generated.
  • Training Loop:
    • Adversarial Training: Train the Generator and Discriminator alternately.
    • Generator Loss: A composite loss function that includes both adversarial loss (to fool the Discriminator) and a mean-squared-error term (to ensure similarity with ground-truth clean signals).
    • Discriminator Loss: Binary cross-entropy loss to correctly classify real vs. generated signals.
    • Hyperparameters: Use Adam optimizer with a learning rate of 0.001, batch size of 64, and train for a sufficient number of epochs (e.g., 1000) until convergence.
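A minimal PyTorch skeleton of the generator/discriminator pair and the composite generator loss is given below. Layer sizes follow the protocol (two 50-unit LSTM layers; a four-layer 1D-CNN with sigmoid output), but the remaining details (kernel sizes, strides, pooling) are illustrative assumptions rather than the published AnEEG architecture [37].

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """LSTM generator: contaminated EEG epoch -> cleaned epoch."""
    def __init__(self, hidden=50):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, 1)
        h, _ = self.lstm(x)
        return self.out(h)                 # same shape as x

class Discriminator(nn.Module):
    """Four-layer 1D-CNN: real vs. generated epoch (sigmoid output)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, 5, stride=2, padding=2),
            nn.AdaptiveAvgPool1d(1), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, seq_len, 1)
        return self.net(x.transpose(1, 2)).squeeze(-1)  # (batch, 1)

# One illustrative generator step with the composite loss
gen, disc = Generator(), Discriminator()
noisy = torch.randn(8, 200, 1)             # contaminated epochs
clean = torch.randn(8, 200, 1)             # ground-truth placeholders
fake = gen(noisy)
g_loss = (nn.BCELoss()(disc(fake), torch.ones(8, 1))  # adversarial term
          + nn.MSELoss()(fake, clean))                # reconstruction term
```

In the full adversarial loop, g_loss is minimized with Adam (lr = 0.001) while the discriminator is trained in alternation on binary cross-entropy.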

4. Validation and Quantitative Analysis:

  • Model Inference: Feed new, unseen artifact-contaminated EEG epochs through the trained Generator to produce cleaned outputs.
  • Performance Evaluation: Calculate quantitative metrics including NMSE, RMSE, Correlation Coefficient (CC), SNR, and SAR by comparing the cleaned signal with the ground-truth clean signal [37].
Protocol for Hybrid CNN-LSTM with EMG Reference for Muscle Artifact Removal

This protocol is specifically designed for removing muscle artifacts (EMG) from EEG signals in HRI contexts, such as those involving SSVEP, by leveraging additional EMG recordings [38].

1. Equipment and Software Setup:

  • Multi-modal Data Acquisition: Synchronized EEG and EMG recording systems.
    • EEG Cap: Standard EEG cap with electrodes placed according to the international 10-20 system.
    • EMG Electrodes: Surface EMG electrodes placed on relevant facial and neck muscles (e.g., masseter, temporalis).
  • Stimulation Setup: A visual stimulus unit capable of presenting SSVEP stimuli (e.g., a flickering LED screen).
  • Software: Same as in the preceding AnEEG protocol.

2. Data Collection and Preprocessing:

  • Experimental Paradigm:
    • Participants are presented with SSVEP stimuli (e.g., a visual stimulus flickering at 15 Hz).
    • Simultaneously, participants perform strong jaw clenching to induce muscle artifacts.
    • Record synchronized EEG and EMG data throughout the session.
  • Data Synchronization: Precisely align EEG and EMG signals temporally.
  • Data Augmentation: Generate an augmented training dataset by artificially adding recorded EMG artifacts to clean EEG segments. This is crucial for creating a robust model [38].

3. Model Architecture and Training:

  • Hybrid CNN-LSTM Model:
    • Input: A combined data stream of multi-channel EEG signals and one or more channels of concurrent EMG signals.
    • Feature Extraction (CNN): The input is first passed through 1D convolutional layers to extract local temporal and spatial features from the combined signal.
    • Temporal Modeling (LSTM): The features from the CNN are then fed into LSTM layers to model long-range dependencies and temporal dynamics in the data.
    • Output Layer: A fully connected layer maps the LSTM outputs to a cleaned EEG signal with the same dimensions as the input EEG.
  • Training: The model is trained in a supervised manner. The input is the contaminated EEG + EMG reference, and the target output is the corresponding clean EEG (which can be obtained from periods without artifact or via simulation during augmentation). The loss function is typically Mean Squared Error (MSE).

4. Validation and SSVEP-Specific Evaluation:

  • Signal Quality Assessment: Visually inspect the cleaned EEG in both time and frequency domains.
  • SSVEP Preservation Metric: Calculate the Signal-to-Noise Ratio (SNR) in the frequency domain at the stimulation frequency (e.g., 15 Hz) before and after artifact removal. A successful cleaning process will show a significant increase in SNR, indicating that the neural response (SSVEP) is preserved while the background noise (artifact) is suppressed [38].
  • Comparative Analysis: Benchmark the performance against classical methods like ICA and linear regression.
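The SSVEP preservation metric can be computed directly from a Welch PSD. The sketch below is our own helper (names and defaults are illustrative); the two bins adjacent to the peak are excluded from the noise estimate so spectral leakage is not counted as noise.

```python
import numpy as np
from scipy.signal import welch

def ssvep_snr_db(sig, fs, f_stim=15.0, n_neighbors=5):
    """SNR (dB) at f_stim: peak-bin power over the mean power of
    neighboring bins, excluding the bins directly adjacent to the
    peak to avoid counting spectral leakage as noise."""
    freqs, psd = welch(sig, fs=fs, nperseg=int(fs * 4))
    k = int(np.argmin(np.abs(freqs - f_stim)))
    side = np.r_[psd[k - n_neighbors - 1:k - 1],
                 psd[k + 2:k + n_neighbors + 2]]
    return 10 * np.log10(psd[k] / side.mean())
```

Evaluating this before and after cleaning at the 15 Hz stimulation frequency makes the "significant increase in SNR" criterion operational.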

Signaling Pathways and Workflow Visualizations

Workflow for Real-Time Artifact Removal in HRI

This diagram illustrates the end-to-end pipeline for processing neural signals to enable robust human-robot interaction, integrating artifact removal as a critical first step.

[Workflow diagram: EEG/EMG data acquisition → band-pass & notch filtering → deep learning model (e.g., CNN-LSTM, GAN) → artifact-cleaned EEG signal → feature extraction (e.g., HOC, Hjorth, band power) → state/intent classification (emotion, ErrP, motor imagery) → robot action/adaptation, closing the real-time HRI loop.]

Architecture of a Hybrid CNN-LSTM Model for Artifact Removal

This diagram details the internal structure of a hybrid CNN-LSTM model that uses an additional EMG reference signal to remove muscle artifacts from EEG.

[Architecture diagram: multi-channel EEG + EMG reference signal → 1D convolutional layers (feature extraction, local pattern detection) → LSTM layers (temporal modeling, long-range dependency capture) → cleaned EEG signal.]

The Scientist's Toolkit: Research Reagent Solutions

This section lists key computational tools, algorithms, and data resources that form the essential "reagent solutions" for developing and implementing the discussed artifact removal approaches in HRI research.

Table 3: Essential Research Tools and Resources

| Tool/Resource Name | Type | Primary Function | Application Context |
| --- | --- | --- | --- |
| AnEEG (LSTM-GAN) [37] | Deep Learning Model | Removes various artifacts by generating clean EEG signals. | General EEG preprocessing for clinical or BCI applications. |
| Hybrid CNN-LSTM (w/ EMG) [38] | Deep Learning Model | Specifically targets muscle artifacts using EMG reference. | HRI studies involving motion or SSVEP paradigms. |
| FLM Optimization [40] | Hybrid Training Algorithm | Firefly + Levenberg-Marquardt optimizes neural network weights. | Enhancing adaptive filters for artifact removal. |
| Mutual Information (Epanechnikov) [7] | Blind Source Separation | Identifies and removes artifact components based on mutual information. | Emotion recognition from EEG; general artifact removal. |
| DBGS Algorithm [41] | Hybrid Hardware-Software | Real-time template subtraction for stimulation artifacts. | Functional Electrical Stimulation (FES) environments. |
| HOC & Hjorth Features [7] | Feature Extraction | Provides stable, discriminative features from cleaned EEG signals. | Emotion estimation and pattern recognition post-artifact removal. |
| ICA & SOBI [2] [7] | Classical Algorithms | Baseline methods for blind source separation and component rejection. | Benchmarking and comparison against new deep learning methods. |

This application note provides a detailed protocol for establishing a real-time processing pipeline for artifact removal in human-robot interaction (HRI) research. Such pipelines are critical for developing brain-centered HRI experiences that are intuitive, effective, and capable of adapting to human cognitive and emotional states. The document outlines the complete system architecture, step-by-step experimental procedures, and validation methodologies required to implement a robust pipeline capable of processing electroencephalography (EEG) data under real-time constraints, thereby facilitating advanced affective HRI research.

The evolution of Human-Robot Interaction (HRI) is increasingly focused on creating brain-centered experiences where robots can understand and adapt to human states in real-time [42]. A significant challenge in this domain is that current robots often fail to comprehend personalized intents, attentions, and emotions, which prevents them from serving people appropriately across different contexts [42]. Real-time artifact removal from physiological signals like EEG is a foundational technology to overcome this barrier. It enables robots to dynamically interpret a user's emotional and cognitive state, paving the way for co-adaptive joint actions and more natural communication [42] [2].

Affective HRI requires lightweight software and affordable, wearable devices to become practically viable [2]. The real-time estimation of emotions from EEG data presents a particular optimization challenge, balancing processing speed with high accuracy. Traditional offline, supervised artifact removal methods often involve complex deep learning architectures with extensive hyper-parameter tuning, processes that can take days or weeks and are unsuitable for real-time applications [2]. The pipeline described herein addresses these obstacles by integrating optimized artifact removal and emotion estimation methodologies that function within strict real-time constraints, making it possible to conduct HRI studies that are both ecologically valid and scientifically rigorous.

System Architecture and Workflow

The proposed pipeline is engineered for the continuous processing of EEG data, from acquisition to the final output of a classified emotional state, which can then be used by a robot to modulate its interaction. The entire system must operate with minimal latency to be effective in real-time HRI scenarios.

The logical flow of the real-time processing pipeline, from signal acquisition to robot action, is illustrated below.

[Pipeline diagram: EEG signal acquisition → real-time artifact removal → feature extraction → feature smoothing → dimensionality reduction → emotion state classification → HRI command → robot action.]

Data Flow Description

  • EEG Signal Acquisition: The pipeline begins with the continuous collection of raw EEG data from a multi-electrode headset. For emotion estimation, research indicates that a focused set of eight electrodes (AF3, T7, TP7, P7, AF4, T8, TP8, P8) covering temporal and prefrontal areas is often sufficient and optimal for real-time processing [2].
  • Real-Time Processing Core: This series of steps transforms the raw signal into a classified emotional state.
    • Artifact Removal: Corrupting signals, such as those from eye blinks (Electro-oculographic artifacts) and muscle movement, are identified and removed.
    • Feature Extraction: Stable and meaningful features are calculated from the cleaned signal in specific frequency bands (e.g., Beta: 16-30 Hz, Gamma: 30-50 Hz) [2].
    • Feature Smoothing: Variability over time is reduced using techniques like Linear Dynamic Systems (LDS) or Moving Average to improve signal stability [2].
    • Dimensionality Reduction: A filter-based feature selection method is recommended to choose the most relevant features without the computational overhead of wrapper or embedded methods, thus meeting real-time constraints [2].
    • Emotion State Classification: A pre-trained classifier maps the processed features to a discrete emotional state (e.g., positive, negative, neutral).
  • HRI Command & Robot Action: The classified emotional state is translated into a behavioral command for the robot, enabling an adaptive interaction.
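Of the two smoothing options mentioned in the processing core, the causal moving average is the simpler; a minimal sketch (our own helper name) follows.

```python
import numpy as np

def moving_average_smooth(feats, k=5):
    """Causal moving-average smoothing of a feature time series.

    feats : (n_frames, n_features) sequence of per-window features
    Each frame is replaced by the mean of the last k frames, damping
    frame-to-frame variability before classification.
    """
    out = np.empty_like(feats, dtype=float)
    for i in range(len(feats)):
        out[i] = feats[max(0, i - k + 1):i + 1].mean(axis=0)
    return out
```

Being causal, the filter uses only past frames and therefore adds no look-ahead latency to the real-time loop; an LDS smoother trades this simplicity for a generative model of the feature dynamics.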

Experimental Protocols and Methodologies

Protocol: Real-Time Electro-oculographic (EOG) Artifact Removal

Objective: To remove ocular artifacts from continuous EEG data with minimal loss of information and processing time.

Materials:

  • Raw EEG data stream (e.g., from a headset like OpenBCI [42]).
  • Processing software (e.g., Python with MNE, SciPy).

Methodology:

  • Initial Filtering: Apply a bandpass filter (1–50 Hz) to the raw signal. This range preserves the relevant frequency bands for emotion estimation (Delta, Theta, Alpha, Beta, Gamma) while partially attenuating high-frequency muscle artifacts [2].
  • Notch Filtering: Apply a 50 Hz (or 60 Hz, depending on regional power systems) notch filter to eliminate background power line noise [2].
  • EOG Artifact Removal: Implement an automatic EOG removal technique. Two methodologies suitable for real-time application are:
    • Independent Component Analysis (ICA) with Wavelet Analysis: ICA decomposes the signal into independent components. Components identified as artifacts (based on their topography and time-course) can be removed, and the signal can be reconstructed without them. Wavelet analysis can further refine this process [2].
    • Regression-Based Methods: These methods estimate and subtract the EOG contribution from the EEG signal.
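
As a concrete illustration of the filtering steps and the regression-based option, the following minimal Python/SciPy sketch runs on synthetic data with an assumed 250 Hz sampling rate; `preprocess` and `regress_out_eog` are illustrative helper names, not tools from the cited work:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 250.0  # assumed sampling rate (Hz)

def preprocess(eeg, fs=fs):
    """Bandpass 1-50 Hz, then 50 Hz notch (use 60 Hz in 60 Hz regions)."""
    b_bp, a_bp = butter(4, [1.0, 50.0], btype="bandpass", fs=fs)
    eeg = filtfilt(b_bp, a_bp, eeg, axis=-1)
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, eeg, axis=-1)

def regress_out_eog(eeg, eog):
    """Estimate each channel's EOG propagation coefficient by least
    squares and subtract the scaled EOG trace (regression-based removal)."""
    coeffs = (eeg @ eog) / (eog @ eog)          # one coefficient per channel
    return eeg - np.outer(coeffs, eog)

# Synthetic demo: 2 channels contaminated by a shared blink-like transient.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / fs)
eog = np.exp(-((t - 2.0) ** 2) / 0.01)          # blink-like transient
clean = rng.standard_normal((2, t.size)) * 0.1
contaminated = clean + np.vstack([0.8 * eog, 0.5 * eog])
filtered = preprocess(contaminated)
restored = regress_out_eog(contaminated, eog)
```

Regression-based removal assumes the EOG channel contains little neural activity; when that assumption fails, the subtraction also removes some brain signal, which is one reason ICA-based approaches are often preferred.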

Validation:

  • Quantitative: Compare the processing time and the signal-to-noise ratio (SNR) before and after artifact removal against ground-truth, clean data.
  • Qualitative: Visually inspect the signal pre- and post-processing to ensure meaningful brain activity is preserved.

Protocol: Emotion Estimation for Affective HRI

Objective: To accurately classify a subject's emotional state from EEG features in real-time.

Materials:

  • Artifact-free EEG data stream.
  • A validated emotional model (e.g., the discrete model with positive, negative, and neutral states) [2].
  • A pre-trained classification model.

Methodology:

  • Feature Extraction: From the cleaned EEG signal, extract features from the Beta (16-30 Hz) and Gamma (30-50 Hz) bands, as these have proven most effective for emotion estimation [2]. Calculate features in the frequency domain (e.g., Power Spectral Density).
  • Feature Smoothing: Apply a smoothing filter (e.g., LDS or moving average) to the extracted features to reduce session-to-session and subject-to-subject variability [2].
  • Scaling: Scale the features using a robust scaler (e.g., median/IQR-based), which is less sensitive to outliers than min-max scaling or z-score standardization.
  • Classification: Use a lightweight machine learning classifier (e.g., Linear SVM, Logistic Regression) that was pre-trained on a benchmark database like SEED. Avoid complex deep learning models that require extensive hyper-parameter tuning and computation [2].
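
The four steps above can be sketched with SciPy and scikit-learn. This is a minimal illustration on synthetic epochs, assuming a 250 Hz sampling rate and a moving-average smoother; it is not the exact pipeline of [2]:

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.svm import LinearSVC

fs = 250.0
BANDS = {"beta": (16, 30), "gamma": (30, 50)}

def bandpower_features(epoch, fs=fs):
    """Welch PSD per channel, summed over the beta and gamma bands."""
    freqs, psd = welch(epoch, fs=fs, nperseg=int(fs))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].sum(axis=1))
    return np.asarray(feats)

def moving_average(features, k=5):
    """Smooth each feature over consecutive epochs to reduce variability."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, features)

# Synthetic demo: two pseudo-classes of 8-channel epochs.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((40, 8, 500))
epochs[20:] *= 2.0                      # crude class difference in band power
X = moving_average(np.array([bandpower_features(e) for e in epochs]))
y = np.repeat([0, 1], 20)
clf = make_pipeline(RobustScaler(), LinearSVC()).fit(X, y)
```

In practice the classifier would be pre-trained on a benchmark set such as SEED and applied to incoming epochs one at a time.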

Validation:

  • Evaluate performance using accuracy under both subject-dependent and subject-independent paradigms on a known database [2].
  • In a live HRI experiment, validate outcomes through user self-reporting and behavioral analysis of the interaction.

Quantitative Data and Analysis

The performance of different components in the pipeline can be evaluated using the following quantitative metrics.

Table 1: Comparison of Real-Time Artifact Removal Techniques

Technique | Primary Use | Key Parameters | Reported Processing Time | Advantages
ICA with Wavelet Analysis [2] | EOG & muscle artifact removal | Components to reject, wavelet family | Optimized for real-time | High accuracy in artifact separation
Regression-Based Methods | EOG artifact removal | Regression coefficients | Fast | Computationally lightweight
Notch Filter [2] | 50/60 Hz powerline noise | Frequency (50/60 Hz), Q-factor | Negligible | Highly effective for target noise
Bandpass Filter (1-50 Hz) [2] | Broadband noise & muscle artifact | Low-cut (1 Hz), high-cut (50 Hz) | Negligible | Preserves key frequency bands for HRI

Table 2: Emotion Classification Performance on SEED Database (Example)

Classification Approach | Subject-Dependent Accuracy | Subject-Independent Accuracy | Suitable for Real-Time? | Notes
Proposed Methodology (e.g., with LDS smoothing) [2] | High (>90% achievable) | Maintains high performance | Yes | Balanced accuracy and speed
Deep Learning Models (e.g., CNNs) [2] | Very high | Can be high with large data | Often no | High computational cost, slow tuning
SVM with Raw Features | Moderate | Lower | Yes | Lower accuracy, especially cross-subject

The workflow for the emotion estimation protocol, from pre-processing to classification, is detailed below.

Emotion Estimation Protocol: Artifact-Free EEG Input → Bandpower Extraction (Beta, Gamma bands) → Feature Smoothing (LDS / Moving Average) → Robust Feature Scaling → Dimensionality Reduction (Filter-Based Method) → Pre-Trained Classifier (e.g., Linear SVM) → Emotional State Output (Positive, Negative, Neutral)

The Scientist's Toolkit: Research Reagent Solutions

This section lists the essential hardware, software, and algorithmic "reagents" required to implement the real-time HRI pipeline.

Table 3: Essential Materials and Tools for the HRI Pipeline

Item Name / Category | Specification / Example | Primary Function in the Pipeline
Wearable EEG Headset | OpenBCI [42] | Acquires raw brainwave data from the user in a portable format suitable for HRI settings
Real-Time Data Bus | Apache Kafka [43] | Handles the continuous, low-latency streaming of EEG data and messages between pipeline components
Artifact Removal Algorithm | Real-Time ICA [2] | Identifies and removes ocular and muscle artifacts from the EEG stream
Feature Set | Beta & Gamma Bandpower [2] | Serves as the classifier's input variables, capturing the neural correlates of emotion
Feature Smoothing Algorithm | Linear Dynamic Systems (LDS) [2] | Reduces temporal variability in features, improving the stability and accuracy of emotion estimation
Classification Model | Linear SVM [2] | A lightweight, fast model that maps smoothed EEG features to a discrete emotional state
Robotics Middleware | Robot Operating System (ROS) [44] | Provides the software framework to integrate the emotion classification output with robot control logic

Implementation Considerations for HRI Research

Implementing this pipeline in live HRI studies requires careful consideration of several factors. The entire system must be designed for low latency to ensure the robot's response is perceived as timely and natural by the human user [2]. Furthermore, the choice between subject-dependent and subject-independent models is crucial. While subject-dependent models can offer higher accuracy, subject-independent models are more practical for applications with new users [2]. Finally, a robust monitoring framework should be established to track system health metrics such as latency, error frequency, and data saturation in real-time, ensuring the pipeline's reliability during extended experimental sessions [43].

From Lab to Real World: Solving Practical Implementation Hurdles

Optimizing Computational Efficiency for Real-Time Constraints

Affective human-robot interaction (HRI) requires software that is not only accurate but also computationally efficient to function under real-time constraints. A significant challenge in this domain is processing electroencephalography (EEG) signals, which involves critical steps like artifact removal and emotion estimation without introducing disruptive delays [3]. The optimization of these processes is paramount for creating fluid and natural interactions between humans and robots, particularly when using wearable EEG devices in real-world, uncontrolled environments [31]. This document outlines application notes and protocols for achieving such optimization, focusing on methodologies that balance high accuracy with minimal processing time.

Core Computational Challenges in Real-Time EEG Processing

Processing EEG signals for real-time HRI presents two primary, interconnected challenges that must be addressed to ensure system efficacy and user acceptance.

  • Online Artifact Removal: The first major obstacle is the online removal of artifacts—unwanted signals that are not of cerebral origin. The most common artifacts are electro-oculographic (EOG) from eye movements and blinks, muscular artifacts, and environmental noise like 50/60 Hz power line interference [3]. Traditional, offline artifact removal often involves manual inspection and component rejection, which is computationally expensive and time-consuming. Real-time systems require automated, fast, and reliable methods that remove these artifacts without sacrificing valuable neural information [3] [31].
  • Rapid and Accurate Classification: The second challenge is performing feature extraction, smoothing, and pattern classification with high accuracy across sessions and subjects, all within a strict time budget [3]. Complex deep learning models, while powerful, often require extensive hyper-parameter tuning and computational resources, making them less suitable for real-time applications where quick adaptation is key [3]. Furthermore, the inherent variability of EEG signals across individuals and recording sessions complicates the creation of robust models.

Optimized Methodologies and Protocols

Real-Time Artifact Removal Techniques

Effective artifact removal is a prerequisite for reliable EEG analysis. The following protocols detail optimized approaches for different artifact types.

Protocol 3.1.1: Real-Time Electro-oculographic (EOG) Artifact Removal

  • Objective: To remove eye blink and movement artifacts from continuous EEG data with minimal information loss and processing time.
  • Methodology: Independent Component Analysis (ICA) combined with wavelet analysis [3].
    • Decomposition: Apply a real-time compatible ICA algorithm (e.g., FastICA) to decompose the multi-channel EEG signal into independent components (ICs).
    • Identification: Automatically identify components correlated with EOG artifacts. This can be based on topographic patterns (frontopolar focus) and temporal characteristics (high-amplitude, low-frequency spikes).
    • Correction: Instead of complete rejection, apply a wavelet-based correction to the identified artifact components to preserve underlying neural activity.
    • Reconstruction: Reconstruct the EEG signal without the artifact-dominated components.
  • Rationale: ICA is effective at separating neural and ocular sources. Wavelet analysis provides a multi-resolution framework for precise local correction, reducing the overall loss of information compared to whole-component rejection [3].
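
A minimal, self-contained sketch of the decomposition-identification-reconstruction loop, using scikit-learn's FastICA on synthetic data. As a simplifying assumption, artifact components are flagged here by excess kurtosis (a stand-in for the topographic and temporal criteria described above), and flagged components are fully zeroed rather than wavelet-corrected:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs, n_ch = 250, 4
t = np.arange(0, 8, 1 / fs)

# Synthetic sources: ongoing "neural" noise plus a spiky blink train.
blink = np.zeros_like(t)
blink[::500] = 1.0
blink = np.convolve(blink, np.hanning(50), mode="same")   # smooth the spikes
neural = rng.standard_normal((n_ch - 1, t.size)) * 0.2
sources = np.vstack([blink, neural])
mixing = rng.standard_normal((n_ch, n_ch))
eeg = mixing @ sources                                    # observed channels

ica = FastICA(n_components=n_ch, random_state=0)
comps = ica.fit_transform(eeg.T)                          # (samples, comps)

def kurt(x):
    """Excess kurtosis; blink components are sparse and super-Gaussian."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

bad = [i for i in range(n_ch) if kurt(comps[:, i]) > 5.0]
comps[:, bad] = 0.0                                       # reject artifacts
cleaned = ica.inverse_transform(comps).T                  # back to channels
```

With real recordings, the identification step would also check frontopolar topography or correlation with an EOG reference rather than relying on kurtosis alone.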

Protocol 3.1.2: Filtering for Muscular and Environmental Artifacts

  • Objective: To attenuate high-frequency muscle noise and line noise.
  • Methodology:
    • Bandpass Filtering: Apply a 1–50 Hz bandpass filter. This range encompasses the standard frequency bands for emotion estimation (Delta: 1–4 Hz, Theta: 4–8 Hz, Alpha: 8–16 Hz, Beta: 16–30 Hz, Gamma: 30–50 Hz) while suppressing high-frequency muscle artifacts [3].
    • Notch Filtering: Apply a notch filter (e.g., 50 Hz or 60 Hz, depending on region) to remove power line interference. Infinite Impulse Response (IIR) filters are often chosen for their computational efficiency [3].
  • Rationale: This pre-processing step is computationally lightweight and effectively reduces a significant portion of non-ocular artifacts before more sophisticated algorithms are applied.

The following workflow integrates these techniques into a cohesive real-time processing pipeline.

Raw EEG Signal → Pre-Filtering (50/60 Hz Notch Filter and 1-50 Hz Bandpass Filter) → ICA Decomposition → Component Identification → Wavelet-Based Correction → Signal Reconstruction → Artifact-Reduced EEG

Efficient Emotion and Error Potential Estimation

Once artifacts are mitigated, the cleaned signal can be used for state estimation. The protocols below focus on optimizing feature extraction and classification.

Protocol 3.2.1: Optimized Emotion Estimation Workflow

  • Objective: To accurately classify emotional states from EEG signals under real-time constraints.
  • Methodology:
    • Electrode Selection: Restrict analysis to a focused set of electrodes. Research indicates that six temporal and two prefrontal electrodes (AF3, T7, TP7, P7, AF4, T8, TP8, P8) are often sufficient for emotion estimation, reducing data dimensionality from the outset [3].
    • Feature Extraction: Compute a stable set of features from the frequency domain. Differential Entropy (DE) from the Beta (16–30 Hz) and Gamma (30–50 Hz) bands has proven highly effective [3].
    • Feature Smoothing: Apply a smoothing technique like Linear Dynamic Systems (LDS) or a simple moving average to reduce trial-to-trial variability and improve signal-to-noise ratio [3].
    • Classification: Employ a computationally efficient classifier. While deep learning is powerful, for real-time applications, a well-tuned Support Vector Machine (SVM) or Linear Discriminant Analysis (LDA) can provide high accuracy with significantly lower computational overhead [3].
  • Rationale: This pipeline optimizes each step for speed and stability, from a minimal electrode setup to the use of simple yet powerful classifiers, enabling real-time performance.
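
For a band-limited signal modeled as Gaussian, Differential Entropy reduces to DE = 0.5 · ln(2πeσ²). A small sketch of per-band DE extraction follows, assuming a 250 Hz sampling rate; the band edges match the protocol above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0

def differential_entropy(signal, band, fs=fs):
    """DE of a band-passed signal under the Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * sigma^2)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered))

# Demo on 4 s of synthetic broadband activity.
rng = np.random.default_rng(3)
x = rng.standard_normal(int(4 * fs))
de_beta = differential_entropy(x, (16, 30))    # Beta band
de_gamma = differential_entropy(x, (30, 50))   # Gamma band
```

Because DE is a logarithm of band variance, it compresses the heavy-tailed distribution of raw band power, which is part of why it behaves well as an input feature.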

Protocol 3.2.2: Feature-Based Error-Related Potential (ErrP) Detection

  • Objective: To reliably detect ErrPs in EEG signals for implicit HRI feedback across different subjects and tasks.
  • Methodology:
    • Feature Characterization: Extract a wide set of features from the EEG time series that characterize the ErrP waveform, including temporal, spectral, and spatial properties.
    • Cross-Subject Classification: Train a subject-independent classifier (e.g., LDA or SVM) on the extracted feature set. This avoids the need for per-user calibration, which is a bottleneck for real-world deployment [26].
    • Handling Class Imbalance: Address the natural imbalance between ErrP (error) and non-ErrP (correct) trials. Simple oversampling of the minority class (ErrP) can be used, with performance evaluated using the F1 score to ensure a balance between precision and recall [26].
  • Rationale: A feature-based approach that generalizes across subjects is crucial for scalable HRI. Focusing on the F1 score ensures that the system is optimized for the practical trade-off between missing errors and issuing false corrections [26].
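
A minimal sketch of the imbalance-handling step on synthetic feature vectors: the minority (ErrP) class is randomly oversampled in the training split only, an LDA classifier is trained, and performance is reported as F1. The class proportions and feature dimensionality are illustrative assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic feature vectors: ~15% "error" (ErrP) trials vs. "correct" trials.
n_err, n_ok, n_feat = 60, 340, 12
X = np.vstack([rng.normal(1.0, 1.0, (n_err, n_feat)),
               rng.normal(0.0, 1.0, (n_ok, n_feat))])
y = np.array([1] * n_err + [0] * n_ok)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Random oversampling of the minority (ErrP) class, training set only --
# oversampling before the split would leak duplicates into the test set.
err_idx = np.flatnonzero(y_tr == 1)
extra = rng.choice(err_idx, size=(y_tr == 0).sum() - err_idx.size,
                   replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = LinearDiscriminantAnalysis().fit(X_bal, y_bal)
f1 = f1_score(y_te, clf.predict(X_te))
```
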

Performance Evaluation Metrics

To objectively evaluate the computational efficiency and accuracy of the optimized system, the following metrics should be used.

Table 1: Key Performance Metrics for Real-Time EEG Processing

Metric | Formula | Target for Real-Time HRI
Artifact Removal Processing Time | N/A | < 100 ms per epoch [3]
Classification Accuracy | (TP + TN) / (TP + TN + FP + FN) | > 85% (subject-dependent) [3]
Recall (Sensitivity) | TP / (TP + FN) | Maximize (critical for ErrP detection) [26]
F1 Score | 2 × (Precision × Recall) / (Precision + Recall) | > 0.7 (for imbalanced ErrP datasets) [26]

Abbreviations: TP (True Positive), TN (True Negative), FP (False Positive), FN (False Negative).
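
For reference, the table's formulas can be computed directly from confusion-matrix counts; the counts in the demo call below are illustrative, not from any cited study:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision, and F1 from confusion-matrix counts,
    following the formulas in Table 1."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}

# Hypothetical ErrP detection session: errors are rare, so accuracy alone
# looks flattering while F1 reveals the precision/recall trade-off.
m = classification_metrics(tp=45, tn=520, fp=25, fn=10)
```
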

The Scientist's Toolkit: Research Reagent Solutions

Implementing the aforementioned protocols requires a suite of software tools and algorithmic approaches.

Table 2: Essential Research Reagents for Real-Time EEG-HRI

Tool/Algorithm | Type | Function in the Pipeline
Independent Component Analysis (ICA) | Algorithm | Blind source separation for isolating ocular and muscular artifacts from neural signals [3] [31]
Wavelet Transform | Algorithm | Multi-resolution analysis for precise, localized correction of artifacts in specific signal components [3] [31]
Artifact Subspace Reconstruction (ASR) | Algorithm/Pipeline | Statistical method for identifying and removing high-variance artifact components; suitable for various artifact types [31]
Linear Discriminant Analysis (LDA) | Algorithm | A lightweight, efficient classifier suitable for subject-independent ErrP detection and emotion classification [26]
SEED Database | Dataset | A publicly available dataset for benchmarking EEG-based emotion estimation methodologies [3]

The strategic application of these tools and protocols is summarized in the following optimization strategy diagram.

Goal: real-time computational efficiency, pursued via three strategies that each feed the outcome of low-latency, accurate HRI:

  • Dimensionality Reduction: focused electrode montage (e.g., 8 key electrodes); stable feature selection (e.g., Differential Entropy).
  • Algorithm & Model Selection: fast artifact removal (ICA + Wavelet, ASR); efficient classifiers (LDA and SVM over deep learning).
  • Cross-Subject Generalization: subject-independent models; feature-based approaches.

In human-robot interaction (HRI) research, real-time analysis of neural signals via electroencephalography (EEG) provides a critical window into human cognitive and affective states. However, full-body movement, essential to naturalistic HRI, introduces motion artifacts that severely compromise EEG signal quality [45] [46]. A central challenge is optimizing artifact removal parameters to maximize noise suppression without removing neural signals of interest, a problem known as "over-cleaning." This application note synthesizes proven parameter-tuning strategies for Artifact Subspace Reconstruction (ASR) and the iCanClean EEG artifact removal algorithm, providing a framework for developing robust real-time artifact removal pipelines in HRI research.

Quantitative Foundations: Performance Data from Key Studies

The following tables consolidate quantitative findings from empirical studies on iCanClean and ASR, providing a basis for informed parameter selection.

Table 1: iCanClean Parameter Sweep Results for Mobile EEG (from Young, Older, and Low-Functioning Older Adults during Walking) [47]

Window Length | R² Threshold (Aggressiveness) | Average Number of "Good" ICA Components | Performance Change vs. Baseline
Not applied (baseline) | Not applied | 8.4 | Baseline
4 seconds | 0.65 | 13.2 | +57%
4 seconds | 0.60 | ~12.7 | +51%
4 seconds | 0.70 | ~12.2 | +45%
2 seconds | 0.65 | ~12.0 | +43%
1 second | 0.65 | ~11.5 | +37%
Infinite | 0.65 | ~10.5 | +25%

Table 2: Comparative Performance of Artifact Removal Methods on a Phantom Head Model (Data Quality Score %) [34]

Artifact Condition | Uncleaned | iCanClean | ASR | Auto-CCA | Adaptive Filtering
Brain (Target) | 57.2% | - | - | - | -
Brain + All Artifacts | 15.7% | 55.9% | 27.6% | 27.2% | 32.9%
Brain + Walking Motion | 21.5% | 56.5% | 30.1% | 28.8% | 35.2%

Table 3: Artifact Subspace Reconstruction (ASR) Parameter Impact [46]

ASR Parameter (k) | Cleaning Aggressiveness | Impact on ICA Decomposition | Recommended Use Case
k = 10 | Very high | Risk of significant "over-cleaning" and signal loss | Not recommended for locomotion
k = 20 | High | Moderate risk of over-cleaning; use with caution | Stationary tasks with large artifacts
k = 30 | Moderate | Balance between cleaning and preservation | General use [46]
k = 50+ | Low | Limited motion artifact removal | Minimal artifact scenarios

Experimental Protocols for Validation

Protocol: Validating Parameters for Mobile EEG with iCanClean

Objective: To empirically determine the optimal iCanClean window length and R² threshold for a specific HRI experimental setup involving human movement [47].

Materials: Dual-layer EEG cap with 120 scalp electrodes and 120 noise electrodes; amplification system; computing setup with iCanClean software.

Procedure:

  • Data Collection: Record EEG data from participants (e.g., n=15 per group) during a task mimicking HRI movements, such as walking on a treadmill at varying speeds and on uneven terrain.
  • Basic Preprocessing: High-pass filter the data at 1 Hz. Perform average re-referencing for scalp and noise channels separately. Reject outlier channels with amplitudes >3 times the median.
  • Parameter Sweep: Process the preprocessed data through iCanClean, systematically varying two parameters:
    • Window Length: Test 1s, 2s, and 4s windows, plus an "infinite" window using the entire recording.
    • R² Threshold: Test values from 0.05 to 1.00 in increments of 0.05.
  • ICA Decomposition: For each parameter combination, perform Independent Component Analysis (ICA) using the AMICA algorithm.
  • Component Quality Assessment: Classify ICA components as 'good' based on two criteria:
    • Dipole Fit: Residual variance (RV) of the component's topographic map must be < 15%.
    • Brain Probability: The component's "brain" label probability from ICLabel must be > 50%.
  • Optimal Parameter Selection: Identify the parameter pair (e.g., 4s window, R²=0.65) that yields the highest number of 'good' brain components without inducing spectral distortions or attenuating expected neural responses (e.g., P300 event-related potentials).

Protocol: Comparing iCanClean and ASR for Real-Time HRI Applications

Objective: To compare the efficacy of iCanClean and ASR in recovering event-related potentials (ERPs) during a dynamic HRI task, such as a Flanker task performed while jogging [46].

Materials: Mobile EEG system; setup for a dynamic cognitive task (e.g., visual stimuli presented during robot interaction).

Procedure:

  • Task Design: Implement a Flanker task under two conditions:
    • Dynamic Condition: Participants perform the task while jogging on a treadmill.
    • Static Condition: Participants perform the identical task while standing still (providing a low-artifact baseline).
  • Data Acquisition: Record EEG data across both conditions.
  • Parallel Processing: Clean the dynamic condition data using two separate pipelines:
    • Pipeline A (iCanClean): Apply iCanClean with parameters set to the previously validated optimum (e.g., 4s window, R²=0.65).
    • Pipeline B (ASR): Apply ASR with a recommended k parameter of 20-30.
  • Evaluation Metrics: Compare the outputs of both pipelines against the static baseline using:
    • ICA Quality: Count of dipolar, brain-like independent components.
    • Spectral Power: Reduction in power at the gait frequency and its harmonics.
    • ERP Fidelity: Presence, latency, and amplitude of expected ERP components (e.g., the P300 congruency effect).
  • Conclusion: Determine which pipeline and parameters best recover the neural signatures observed in the static condition, thereby validating their use for real-time analysis in dynamic HRI.

Signaling Pathways and Workflows

iCanClean core algorithm: Raw EEG Signal (mixed brain + noise) and Reference Noise Signals (dual-layer or pseudo-reference) → Canonical Correlation Analysis (CCA, computed over a chosen Window Length, e.g., 4 seconds) → Identify Correlated Noise Subspaces → Remove Components Exceeding the R² Threshold (cleaning aggressiveness) → Cleaned EEG Signal. Output and downstream analysis: Cleaned EEG Signal → ICA Decomposition → High-Quality Brain Components.

Diagram 1: The iCanClean workflow and parameter influence.
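
The CCA-and-threshold core of this workflow can be sketched in a few lines of NumPy. This is an illustrative simplification of iCanClean (a single window, QR/SVD-based CCA, and a hypothetical `cca_clean` helper), not the published implementation:

```python
import numpy as np

def cca_clean(eeg, noise, r2_threshold=0.65):
    """iCanClean-style cleaning sketch: find canonical variates of the EEG
    that correlate with the noise reference (QR/SVD-based CCA) and regress
    out those exceeding the R^2 threshold. Shapes: (samples, channels)."""
    Qx, _ = np.linalg.qr(eeg - eeg.mean(0))
    Qn, _ = np.linalg.qr(noise - noise.mean(0))
    U, s, _ = np.linalg.svd(Qx.T @ Qn)        # s holds canonical correlations
    variates = (Qx @ U)[:, :s.size]           # EEG-side canonical variates
    bad = variates[:, s ** 2 > r2_threshold]  # noise-dominated subspace
    if bad.shape[1] == 0:
        return eeg
    return eeg - bad @ np.linalg.lstsq(bad, eeg, rcond=None)[0]

# Demo: 6-channel EEG contaminated by 2 motion-reference signals.
rng = np.random.default_rng(5)
n = 2000
brain = rng.standard_normal((n, 6))
motion = rng.standard_normal((n, 2))
mix = rng.standard_normal((2, 6))
eeg = brain + 3.0 * (motion @ mix)
cleaned = cca_clean(eeg, motion, r2_threshold=0.65)
```

In a streaming implementation the same operation would be repeated per window (e.g., every 4 s), which is exactly where the window-length and R² parameters from Table 1 come into play.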

Parameter causes (R² threshold too low, ASR k too low, window too short) → Over-Cleaning → consequences: neural signal loss, attenuated ERP components (e.g., P300), a reduced count of 'good' ICA components, and ultimately invalid neurophysiological conclusions.

Diagram 2: The consequences of over-cleaning and their causes.

The Scientist's Toolkit: Key Research Reagents and Materials

Table 4: Essential Materials and Tools for Real-Time Artifact Removal Research

Item Name | Function/Description | Application in HRI Context
Dual-Layer EEG Cap | A specialized cap with scalp electrodes and mechanically coupled, outward-facing noise electrodes; the noise electrodes record artifact signals without brain activity [47] [34] | Provides ideal reference noise signals for iCanClean during dynamic HRI tasks involving walking, reaching, or head movement
iCanClean Software | An algorithm that uses Canonical Correlation Analysis (CCA) and reference noise signals to detect and remove noise subspaces from EEG data [47] [34] | The core tool for real-time, all-in-one artifact removal without requiring clean calibration data
Artifact Subspace Reconstruction (ASR) | A real-time-capable algorithm in EEGLAB/BCILAB that uses principal component analysis (PCA) to remove high-variance artifacts based on a clean calibration period [46] | An alternative cleaning method when dual-layer caps are unavailable; requires careful tuning of the k parameter
ICLabel EEGLAB Plugin | A trained classifier that automatically labels Independent Components (ICs) from ICA by source type (e.g., brain, muscle, eye, heart) [47] [46] | Critical for quantitative assessment of cleaning quality by counting brain-derived components post-ICA
High-Density EEG System (100+ channels) | An EEG recording system with enough electrodes to enable high-quality ICA decomposition and source localization [47] | Ensures adequate spatial sampling for unmixing neural and artifactual sources in complex HRI environments

Strategies for Low-Density, Wearable EEG Systems

The emergence of low-density, wearable electroencephalography (EEG) systems represents a paradigm shift in neurophysiological monitoring, offering unprecedented opportunities for real-world brain activity assessment. These portable, affordable devices are increasingly overcoming the limitations of traditional high-density EEG labs, which are characterized by high operational costs, limited accessibility, and artificial recording environments [48]. For human-robot interaction (HRI) research, wearable EEG systems enable the investigation of brain dynamics in naturalistic settings, providing crucial insights into emotional and cognitive states during interactive tasks. However, the transition to low-density systems presents significant challenges, particularly regarding signal quality and artifact contamination in real-time applications [3] [2]. This application note outlines comprehensive strategies for implementing low-density, wearable EEG systems, with emphasis on experimental protocols, artifact removal techniques, and validation methods specifically tailored for HRI research contexts.

Technical Foundations of Modern Brain Wearables

Modern brain wearables leverage several advanced technologies that enable reliable brain monitoring outside traditional clinical settings. Understanding these core technologies is essential for selecting appropriate systems and optimizing their implementation in HRI research.

Dry Electrode EEG Systems: Conventional EEG systems require skin abrasion, conductive gel application, and trained technicians—processes that are time-consuming and uncomfortable for patients. Dry electrode technology eliminates these requirements, making it suitable for home-based monitoring and real-time HRI applications. QUASAR's dry electrode EEG sensors feature ultra-high impedance amplifiers (>47 GOhms) that handle contact impedances up to 1-2 MOhms, producing signal quality comparable to wet electrodes. This technology enables recordings through hair without skin preparation, while patented mechanical isolation designs stabilize electrodes for artifact-free recordings even during movement [48].

Ear-EEG Systems: Ear-EEG represents a significant breakthrough for long-term monitoring applications, allowing discreet, comfortable brain monitoring. These systems capture EEG signals from within the ear canal using either dry or wet electrodes. The Naox device employs dry-contact electrodes with active electrode technology featuring 13 TΩ input impedance to minimize noise despite higher electrode-skin impedance (approximately 300 kΩ). Recent innovations include user-generic earpieces with dry electrodes that eliminate hydrogels while maintaining signal quality comparable to wet electrode systems [48].

Multimodal Integration: Beyond electrical activity measurement, modern brain wearables increasingly incorporate complementary technologies. Functional near-infrared spectroscopy (fNIRS) measures changes in blood oxygenation and volume in the cortex, providing complementary insights into brain activity patterns. As a non-invasive neuroimaging modality, fNIRS offers several advantages for portable monitoring, including strong agreement with simultaneously acquired fMRI measurements and greater tolerance to noise and movement than EEG [48].

Table 1: Comparison of Wearable EEG Technologies for HRI Research

Technology | Spatial Resolution | Comfort for Long Sessions | Motion Artifact Resistance | Setup Time | Best Use Cases in HRI
Dry Electrode Headsets | Medium-high | Medium | Medium | ~4 minutes | Controlled laboratory HRI studies
Ear-EEG Systems | Low | High | High | <2 minutes | Long-duration naturalistic interaction studies
Multimodal (EEG + fNIRS) | Medium (EEG) + low (fNIRS) | Medium | Medium-low | 5-10 minutes | Complex cognitive state assessment
Adhesive Patch Systems | Low | Medium-high | High | ~3 minutes | Ambulatory studies with movement

Experimental Protocols for HRI Research

Implementing robust experimental protocols is essential for collecting valid, reproducible EEG data in HRI contexts. The following protocols address the specific challenges of low-density systems in interactive scenarios.

Remote Data Collection Protocol

The HEROIC (Home EEG Recording frOm Interfacing Computer) platform provides an open-source framework for remote EEG data collection during customized neurocognitive tasks. This platform enables participants to independently collect advanced EEG data without expert technician assistance, making it particularly valuable for longitudinal HRI studies [49].

Device Initialization and Setup:

  • Guide participants through visual and written instructions for equipping the selected device
  • Implement real-time feedback on connection status and signal quality for each electrode
  • Include quality control checks before beginning experimental sessions
  • Ensure proper electrode-skin contact through impedance verification

Session Recording:

  • Use configuration files containing instructions for generating stimuli
  • Implement timestamped data collection with stimulus synchronization
  • Employ standardized tasks such as oddball paradigms interlaced with rest periods
  • Ensure precise marker placement for event-related potential (ERP) analysis

Post-Session Completion:

  • Compile timestamped data with stimuli markers into structured files (e.g., CSV format)
  • Include metadata describing session configuration and device parameters
  • Implement automated data integrity checks
  • Secure data transfer and storage procedures [49]

Real-time Emotion Estimation Protocol for HRI

Affective HRI requires lightweight software and wearable devices capable of real-time emotion estimation. The following protocol optimizes this process for low-density systems:

Stimuli Presentation:

  • Use standardized emotion elicitation materials (e.g., video clips, images)
  • Implement the Self-Assessment Manikin for subjective emotion rating
  • Employ Russell's circumplex model for valence and arousal assessment
  • Include appropriate inter-stimulus intervals to prevent habituation

EEG Data Acquisition Parameters:

  • Sampling rate: 250-500 Hz
  • Filter settings: 0.1-35 Hz bandpass filter
  • Electrode selection: Focus on temporal and prefrontal regions (AF3, T7, TP7, P7, AF4, T8, TP8, P8)
  • Reference: Common average reference (CAR) or linked mastoids

Feature Extraction and Processing:

  • Extract features from beta and gamma bands (16-50 Hz), most effective for emotion estimation
  • Apply feature smoothing techniques (Linear Dynamic Systems, moving average, or Savitzky-Golay)
  • Implement dimensional reduction through filter-based, wrapper-based, or embedded methods
  • Use scaling methodologies robust to outliers [3] [2]
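The feature-smoothing step above can be sketched with two of the named options, a moving average and a Savitzky-Golay filter, applied to a hypothetical band-power feature stream; the window sizes are illustrative assumptions, not values from the cited studies:

```python
import numpy as np
from scipy.signal import savgol_filter

def moving_average(x, win=9):
    """Smooth a 1-D feature stream with a simple moving average."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

# Hypothetical band-power feature stream: slow trend plus sample-to-sample noise
rng = np.random.default_rng(0)
feature = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.5 * rng.standard_normal(200)

smoothed_ma = moving_average(feature)
smoothed_sg = savgol_filter(feature, window_length=11, polyorder=3)
# Both smoothers reduce the sample-to-sample variability of the feature trajectory
```

The Savitzky-Golay filter preserves local polynomial trends better than a plain moving average, which matters when the smoothed features feed a classifier sensitive to feature dynamics.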

[Workflow diagram: Stimulus Presentation → EEG Data Acquisition → Signal Preprocessing → Artifact Removal → Feature Extraction → Emotion Classification → Robot Behavior Adaptation]

Advanced Artifact Removal Strategies

Artifact contamination represents the most significant challenge for reliable EEG analysis in HRI contexts, particularly with low-density systems. Effective artifact removal is essential for accurate emotion estimation and cognitive state classification.

Real-time Artifact Removal Framework

Electro-oculographic (EOG) Artifact Removal:

  • Implement real-time independent component analysis (ICA) combined with wavelet analysis
  • Apply notch filters (50/60 Hz) for background noise removal
  • Use frequency-based filtering: EOG artifacts typically appear below 5 Hz, while muscle artifacts dominate 20-300 Hz range
  • Optimize processing parameters to balance computational efficiency and information preservation [3] [2]
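The filtering steps above can be sketched as follows; the sampling rate, filter order, and notch Q factor are illustrative assumptions, and the zero-phase filtfilt used here is an offline convenience (a real-time pipeline would use a causal, stateful filter instead):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250.0  # assumed sampling rate (Hz)

def preprocess(eeg, fs, band=(1.0, 50.0), notch_hz=50.0):
    """Band-pass the trace, then notch out line noise."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    eeg = filtfilt(b, a, eeg)
    bn, an = iirnotch(notch_hz, Q=30.0, fs=fs)
    return filtfilt(bn, an, eeg)

t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)        # 10 Hz neural-band component
line = 0.8 * np.sin(2 * np.pi * 50 * t)   # 50 Hz line-noise component
cleaned = preprocess(alpha + line, fs)
# The 50 Hz component is strongly attenuated while the 10 Hz component survives
```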

Mutual Information-Based Approaches: Novel Blind Source Separation algorithms based on Mutual Information (MI) minimization have demonstrated superior artifact removal performance for emotion recognition tasks. These methods utilize:

  • Probability density estimation through Epanechnikov kernel (outperforming Gaussian kernel)
  • Automated artifact identification using Multiple Artifact Rejection Algorithm (MARA)
  • Component classification and subtraction from original EEG signals
  • Significantly improved classification accuracy (up to 80.13% for emotion recognition) [7]

Optimized Processing Pipeline:

  • Pre-whiten signals before applying BSS algorithms
  • Extract independent latent components using MI-based algorithms
  • Automatically identify artifactual components using MARA
  • Reconstruct clean signals after component subtraction
  • Extract HOC and Hjorth features for classification [7]
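The pre-whitening step can be sketched in plain NumPy; this is a generic ZCA/PCA whitening on a synthetic three-channel mixture, not the specific implementation from [7]:

```python
import numpy as np

def whiten(X, eps=1e-10):
    """PCA-whiten a (channels x samples) matrix: zero mean, identity covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA whitening matrix
    return W @ X, W

rng = np.random.default_rng(1)
sources = rng.standard_normal((3, 5000))
mixed = rng.standard_normal((3, 3)) @ sources  # synthetic 3-channel mixture
Z, W = whiten(mixed)
# Z now has (numerically) identity covariance, the starting point BSS algorithms expect
```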

Table 2: Performance Comparison of Artifact Removal Methods for Emotion Recognition

Method | Computational Efficiency | Classification Accuracy | Information Preservation | Ease of Implementation | Recommended HRI Context
SOBI | High | 68.15% | Medium | High | Basic real-time applications with limited processing resources
MI with Gaussian Kernel | Medium | 78.33% | Medium-High | Medium | Standard laboratory HRI studies with moderate artifact contamination
MI with Epanechnikov Kernel | Medium-High | 80.13% | High | Medium | Advanced HRI applications requiring high classification accuracy
ICA with Wavelet Analysis | Medium | 75-82% (task-dependent) | High | Medium | Scenarios with prominent EOG artifacts

Machine Learning for Artifact Management

Modern artifact handling increasingly leverages machine learning approaches that simultaneously address artifact contamination and state classification:

Feature Selection Optimization:

  • Employ filter-based methods for computational efficiency
  • Implement wrapper-based methods for optimal feature subset selection
  • Utilize embedded methods with complexity penalties to reduce overfitting
  • Apply cross-validation schemes appropriate for EEG time series characteristics [3]

Channel Selection Strategies: Research demonstrates that strategic channel selection can maintain classification performance while significantly reducing system complexity:

  • Permutation-based channel selection identifies most informative EEG channels
  • Dream experience classification maintains accuracy with 30-40 channels compared to high-density systems
  • Occipital channel removal can improve performance for certain classification tasks
  • Focus on prefrontal and temporal regions for emotion estimation [50]

Validation and Performance Metrics

Rigorous validation is essential for establishing the reliability of low-density EEG systems in HRI applications. The following approaches provide comprehensive performance assessment:

Signal Quality Validation

Event-Related Potential (ERP) Validation:

  • Compare characteristic ERP components (P300, N200) between wearable and clinical-grade systems
  • Assess amplitude and latency correlations across devices
  • Evaluate waveform morphology consistency
  • Muse 2 wearables demonstrate nearly identical P300 and N200 waveforms compared to clinical Brain Vision systems [51]

Simultaneous Recording Protocols:

  • Collect data simultaneously from wearable and research-grade systems
  • Calculate time-domain correlations between neighboring electrodes
  • Assess quantitative spike measurements for epilepsy applications
  • Evaluate resting-state activity correspondence [51]

Sleep Staging Validation:

  • Compare wearable EEG against polysomnography (PSG)
  • Calculate Cohen's kappa coefficients for sleep stage agreement
  • Evaluate hypnogram concordance
  • Dreem headband demonstrates strong agreement with PSG for automatic sleep staging [48] [51]
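Cohen's kappa for the stage-agreement step can be computed directly; the two ten-epoch hypnograms below are made-up illustrations, not data from [48] or [51]:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels (e.g., wearable vs. PSG stages)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                            # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 10-epoch hypnograms (0=wake, 1=N1, 2=N2, 3=N3, 4=REM)
wearable = [0, 1, 2, 2, 3, 3, 4, 2, 1, 0]
psg      = [0, 1, 2, 2, 3, 2, 4, 2, 1, 0]
print(round(cohens_kappa(wearable, psg), 3))  # → 0.87
```

Kappa corrects raw agreement for chance, which matters because sleep stages are highly imbalanced across a night.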

Clinical and Behavioral Correlations

Establishing correlation between EEG metrics and behavioral measures strengthens the validity of low-density systems for HRI research:

Cognitive State Correlations:

  • Significant correlations found between perceived cognitive fatigue and combination of EEG/ERP-derived features
  • Frontal theta increases during quiz tasks, parietal alpha suppression during lectures
  • High-beta enhancements in later stages of learning tasks
  • Machine learning models achieving 83% accuracy for learning stage discrimination [51] [52]

Clinical Application Validation:

  • Low-density EEG functional connectivity discriminates Minimally Conscious State plus from minus with 79% accuracy
  • Graph-theoretical features reveal neurophysiological differences in consciousness disorders
  • Specific patterns: lower brain integration in α band for MCS-, higher clustering in δ band for MCS- [53]

[Diagram: validation approaches. High-density EEG (reference standard) and the low-density wearable (test system) both feed simultaneous recording protocols and ERP component comparison; the wearable additionally feeds behavioral correlation analysis and machine learning classification. All four approaches converge on the validation metrics.]

Implementation Challenges and Solutions

Despite significant advances, several challenges remain in the widespread implementation of low-density wearable EEG systems for HRI research:

Data Quality and Signal-to-Noise Ratio: Low-density systems face inherent signal-to-noise ratio limitations compared to high-density clinical systems. Mitigation strategies include:

  • Advanced artifact removal algorithms optimized for specific contamination types
  • Multimodal integration with complementary signals (fNIRS, PPG)
  • Task designs that minimize movement artifacts
  • Environmental controls for remote data collection [51]

Inter-Subject and Cross-Session Variability: EEG signals exhibit substantial variability across individuals and recording sessions, particularly problematic for real-time HRI applications:

  • Implement subject-independent paradigms through transfer learning
  • Apply feature smoothing techniques to reduce session variability
  • Utilize adaptive calibration procedures
  • Develop robust normalization methods [3] [54]

Technical and Computational Constraints: Wearable systems balance performance with practical constraints:

  • Optimize algorithms for real-time processing on limited hardware
  • Balance electrode density with comfort and usability
  • Manage power consumption for extended recording sessions
  • Ensure wireless stability during interactive tasks [48] [49]

The Scientist's Toolkit: Research Reagent Solutions

Table 3: Essential Research Tools for Low-Density Wearable EEG Studies

Tool/Platform | Type | Primary Function | Key Features | Compatibility
HEROIC | Open-source software platform | Remote EEG data collection | Stimulus synchronization, quality control, multiple device support | Muse, Emotiv, OpenBCI
Muse 2 | Consumer-grade EEG headset | Affordable EEG acquisition | 4 electrodes, ~$250 cost, validated for ERP collection | HEROIC, Mind Monitor, Muse Direct
Emotiv EPOC X | Research-grade wearable EEG | High-quality mobile acquisition | 14 channels, saline electrodes, research validation | Emotiv Pro, custom MATLAB
cEEGrid | Ear-EEG system | Discreet, long-term monitoring | Around-ear electrode array, minimal setup | OpenBCI, custom amplifiers
Autoreject | Python library | Automated EEG preprocessing | Bad channel detection, epoch rejection, interpolation | MNE-Python compatible
MARA | Automated artifact removal | Component classification | Machine learning-based ICA component rejection | EEGLAB, Python

Low-density, wearable EEG systems represent a transformative technology for human-robot interaction research, enabling the investigation of brain dynamics in naturalistic environments and real-time adaptive interactions. The strategies outlined in this application note—covering technical implementation, experimental protocols, advanced artifact removal, and validation frameworks—provide researchers with comprehensive guidelines for successful system deployment. While challenges remain in signal quality, computational efficiency, and cross-subject reliability, ongoing advances in dry electrode technology, machine learning approaches, and open-source software platforms continue to enhance the capabilities of these systems. By implementing the detailed protocols and methodologies presented here, HRI researchers can leverage wearable EEG technology to gain unprecedented insights into human cognitive and emotional states during interactive tasks, ultimately advancing the development of more responsive, adaptive robotic systems.

Within human-robot interaction (HRI) research, the accurate measurement of human state and movement is paramount for enabling seamless and safe collaboration. Inertial Measurement Units (IMUs) are crucial auxiliary sensors for this purpose, providing data on human motion and physical interaction. However, their signals are susceptible to various noise sources and artifacts that can degrade performance in real-time systems. This application note details the role of IMUs and the principles of noise reference utilization, providing structured protocols and data to enhance artifact removal in HRI research.

Sensor Fundamentals and Noise Characteristics

Inertial Measurement Unit (IMU) Fundamentals

An Inertial Measurement Unit (IMU) is an electromechanical or solid-state device that measures specific force, angular rate, and sometimes the surrounding magnetic field. Typically, it contains a suite of sensors orthogonally mounted to provide measurements along three axes [55]:

  • Accelerometers: Measure linear acceleration (rate of change of velocity).
  • Gyroscopes: Measure rotational rate or angular velocity.
  • Magnetometers (often included): Measure magnetic field strength, used for heading reference.

The proliferation of Micro-Electromechanical Systems (MEMS) technology has made IMUs extremely small, lightweight, low-power, and cost-effective, facilitating their integration into wearable HRI systems [55]. MEMS accelerometers operate on the principle of a sprung proof mass; acceleration causes displacement of this mass, which is measured via changes in electrical capacitance and converted into an acceleration value [55].

IMU data is corrupted by various error sources, which can be characterized as deterministic (bias, scale factor errors) or stochastic (noise). Key noise-related concepts include:

  • Bias Instability: A measure of the inherent stability of the sensor, representing the lowest noise level achievable, typically expressed in °/hr for gyroscopes [55].
  • Angle Random Walk / Velocity Random Walk: Represents the error growth due to white noise in the sensor output, affecting the accuracy of dead reckoning over time.
  • Artifacts: In HRI contexts, artifacts refer to unwanted signal components not originating from the motion of interest. These can be caused by [56] [7]:
    • Sensor-based artifacts: Electronic noise, temperature effects, calibration errors.
    • Human-motion artifacts: Voluntary movements not part of the intended HRI task, tremors, or gait vibrations.
    • Environmental artifacts: Magnetic disturbances for magnetometers.
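Bias instability and angle random walk are usually read off an Allan deviation curve. A minimal overlapping Allan deviation in NumPy, fed with a synthetic white-noise-only gyro trace (the noise level and durations are arbitrary assumptions):

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal (e.g., gyro output in deg/s)."""
    theta = np.cumsum(rate) / fs                  # integrate rate to angle
    adevs = []
    for tau in taus:
        m = int(tau * fs)                         # cluster size in samples
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        adevs.append(np.sqrt(0.5 * np.mean(d ** 2)) / tau)
    return np.array(adevs)

fs = 100.0
rng = np.random.default_rng(2)
white_rate = 0.05 * rng.standard_normal(int(600 * fs))  # white rate noise only
taus = np.array([0.1, 1.0, 10.0])
adev = allan_deviation(white_rate, fs, taus)
# For pure white noise the curve falls as 1/sqrt(tau) (the angle-random-walk slope)
```

On a real sensor the curve flattens at longer averaging times; the minimum is the bias instability quoted in Table 1.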

Table 1: Key Performance Metrics for MEMS and FOG IMUs (adapted from [55])

Performance Metric | MEMS IMU (e.g., Motus) | Fiber-Optic Gyro (FOG) IMU (e.g., Boreas D90) | Unit
Roll/Pitch Accuracy | 0.05 | 0.005 | °
Heading Accuracy | 0.8 (magnetic) | 0.01 | °
Gyro Bias Instability | 0.2 | 0.001 | °/hr
Weight | ~26 | ~2500 | g
Power Consumption | ~1.4 | ~12 | W

Denoising Methods and Quantitative Performance

Effective artifact removal is critical for leveraging IMU data in real-time HRI. Advanced signal processing techniques have demonstrated significant improvements.

Hybrid Noise Removal Using Lifting Wavelet Transform

A prominent method for inertial sensor denoising is the Lifting Wavelet Transform (LWT), which offers lower time and computational complexity than classical wavelet implementations [56]. One advanced approach optimizes this process by using a Genetic Algorithm (GA) to intelligently average the outputs of different multi-level LWT decompositions [56].

This hybrid method (LWT-GA) has shown substantial performance gains in both static and dynamic scenarios relevant to HRI, as summarized in Table 2.

Table 2: Performance Improvement of LWT-GA Denoising Method for Inertial Sensors (data from [56])

Data Type | Sensor | Reported Improvement | Notes
Dynamic Data | Gyroscope | 83% | Compared to raw sensor data.
Dynamic Data | Accelerometer | 59% | Compared to raw sensor data.
Static Data | Gyroscope | 71% | Compared to raw sensor data.
Static Data | Accelerometer | 36% | Compared to raw sensor data.
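A minimal, single-level Haar lifting step with soft thresholding illustrates the principle behind these gains; the multi-level, GA-tuned averaging of [56] is beyond this sketch, and the threshold value is an arbitrary assumption:

```python
import numpy as np

def haar_lift(x):
    """One lifting step of the Haar wavelet: split, predict, update."""
    even, odd = x[::2], x[1::2]
    detail = odd - even                 # predict step
    approx = even + 0.5 * detail       # update step (preserves the local mean)
    return approx, detail

def haar_unlift(approx, detail):
    """Exact inverse of haar_lift (lifting schemes are perfectly invertible)."""
    even = approx - 0.5 * detail
    odd = even + detail
    out = np.empty(even.size + odd.size)
    out[::2], out[1::2] = even, odd
    return out

def denoise(x, thresh):
    """Soft-threshold the detail coefficients, then reconstruct."""
    approx, detail = haar_lift(x)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    return haar_unlift(approx, detail)

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 2 * t)                   # slow "motion" signal
noisy = clean + 0.3 * rng.standard_normal(t.size)   # additive sensor noise
cleaned = denoise(noisy, thresh=0.6)
```

Because the slow motion signal contributes almost nothing to the detail coefficients, thresholding them removes mostly noise, which is the core idea the GA then optimizes across decomposition levels.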

Artifact Removal in Other Biophysical Signals

The principles of artifact removal extend to other sensors used in HRI. In EEG signal analysis, for example, Blind Source Separation (BSS) algorithms are used to identify and remove artifacts. A Mutual Information (MI)-based BSS algorithm using an Epanechnikov kernel for probability density estimation demonstrated superior performance, achieving 80.13% accuracy in an emotion recognition task, outperforming both Gaussian kernel-based MI and classical SOBI algorithms [7]. This underscores the importance of selecting advanced, computationally efficient algorithms for real-time artifact removal in multi-modal HRI systems.

Experimental Protocols for HRI Research

Protocol: Sensor Integration and Data Synchronization for HRI

Objective: To integrate an IMU into a multi-modal HRI system and achieve temporal synchronization of sensor data streams for robust artifact analysis.

Materials:

  • IMU (e.g., MEMS-based unit).
  • Primary HRI sensor (e.g., camera, EEG headset, robotic joint encoder).
  • Data acquisition system (e.g., ROS network, LabVIEW, or custom DAQ software).
  • Synchronization trigger (e.g., digital pulse, network time protocol).

Methodology:

  • Physical Mounting: Securely mount the IMU on the relevant human body part (e.g., wrist, forearm) or robot link. Ensure minimal slippage using appropriate straps.
  • System Configuration:
    • Configure the IMU's output data rate and range suitable for the HRI task (e.g., 100 Hz for slow collaborative tasks, >500 Hz for dynamic motion or vibration analysis).
    • Configure the primary sensor (e.g., camera frame rate).
  • Synchronization Procedure:
    • Implement a common hardware trigger to start all data streams simultaneously.
    • Alternatively, implement a software-based synchronization using a precise network time protocol.
    • Perform a validation experiment with a simple, timestamped event observable by all sensors (e.g., a sharp tap on a rigid structure, recorded by the IMU as an impulse and by a camera as a visual event).
  • Data Collection:
    • Record synchronized data from all sensors during HRI tasks.
    • Include periods of "no motion" or known reference motions for baseline calibration and noise assessment.
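The tap-based synchronization check in the procedure above can be evaluated numerically by cross-correlating the two event traces and reading off the lag; the 120 ms offset below is a made-up example, not a measured value:

```python
import numpy as np

def estimate_offset(sig_a, sig_b, fs):
    """Estimate the time offset (s) of sig_b relative to sig_a via cross-correlation."""
    n = len(sig_a)
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (n - 1)   # 'full' mode index 0 corresponds to lag -(n-1)
    return lag / fs

fs = 100.0
imu = np.zeros(500)
cam = np.zeros(500)
imu[200] = 1.0   # sharp tap seen by the IMU at t = 2.00 s
cam[212] = 1.0   # same tap in the camera event trace, 120 ms later
offset = estimate_offset(imu, cam, fs)  # → 0.12
```

The estimated offset can then be applied as a constant correction, or monitored over a session to detect clock drift between the data streams.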

Protocol: Validation of Denoising Algorithms for IMU Data

Objective: To quantitatively evaluate the performance of a denoising algorithm (e.g., LWT-GA) on IMU data collected in an HRI context.

Materials:

  • Raw, synchronized IMU data collected under the preceding sensor integration and synchronization protocol.
  • Denoising algorithm (e.g., MATLAB/Python implementation of LWT-GA).
  • Ground truth reference (e.g., high-precision optical motion capture system, or known trajectory from robot encoders).

Methodology:

  • Data Preprocessing:
    • Segment the collected IMU data into relevant epochs corresponding to specific HRI tasks (e.g., "reach," "handover," "collaborative assembly").
    • Apply the denoising algorithm to the raw gyroscope and accelerometer data.
  • Performance Evaluation:
    • For dynamic data, compare the integrated trajectory from the denoised IMU signal against the ground truth reference system. Metrics include Root Mean Square Error (RMSE) of position and orientation.
    • For static data, calculate the standard deviation of the signal before and after denoising. A lower standard deviation indicates better noise suppression.
    • Calculate the percentage improvement using the formula: Improvement (%) = [(Metric_raw - Metric_denoised) / Metric_raw] * 100 [56].
  • Real-time Suitability Assessment:
    • Benchmark the processing time of the denoising algorithm against the data sampling interval to ensure it can run in real-time.
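The RMSE and percentage-improvement computations in the evaluation steps can be sketched directly from the formula above; the error values against the ground-truth reference are made-up illustrations:

```python
import numpy as np

def rmse(estimate, truth):
    """Root mean square error between an estimate and the ground truth."""
    return np.sqrt(np.mean((np.asarray(estimate) - np.asarray(truth)) ** 2))

def improvement_pct(metric_raw, metric_denoised):
    """Percentage improvement as defined in [56]."""
    return (metric_raw - metric_denoised) / metric_raw * 100.0

# Hypothetical position errors (m) relative to a motion-capture ground truth
truth = np.zeros(5)
raw      = np.array([0.10, -0.08, 0.12, -0.11, 0.09])
denoised = np.array([0.03, -0.02, 0.04, -0.03, 0.02])
print(round(improvement_pct(rmse(raw, truth), rmse(denoised, truth)), 1))  # → 71.3
```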

Workflow Visualization and Research Toolkit

Signaling and Processing Workflow

The following diagram illustrates the integrated workflow for using auxiliary sensors and noise references in an HRI artifact removal pipeline.

[Diagram: IMU artifact removal pipeline. Data acquisition (IMU and primary HRI sensor such as a camera or EEG) feeds synchronization and time alignment, then preprocessing (filtering) and a denoising algorithm (e.g., LWT-GA, BSS) supported by a noise reference (e.g., a static IMU); features extracted from the clean signals drive the HRI system for robot control and state estimation.]

The Researcher's Toolkit: Essential Materials and Reagents

Table 3: Key Research Reagent Solutions for IMU-based HRI Studies

Item / Solution | Function / Application in Research
MEMS IMU (e.g., Advanced Navigation Motus) | The primary sensor for measuring linear acceleration and angular rate. Its small size, weight, and power (SWaP-C) profile make it ideal for wearable HRI applications [55].
High-Precision Motion Capture System (e.g., OptiTrack) | Serves as a ground truth reference system for validating the accuracy of denoised IMU data and derived trajectories.
MATLAB or Python with Signal Processing Toolbox | Software environment for implementing and testing denoising algorithms (e.g., Lifting Wavelet Transform, Blind Source Separation).
Robot Operating System (ROS) | Middleware framework for synchronizing, logging, and processing multi-sensor data streams in real-time HRI experiments.
Epanechnikov Kernel-based BSS Algorithm | A computational tool for artifact removal, particularly effective in separating noise from source signals in EEG and other biophysical data [7].
Genetic Algorithm Optimization Library | Used to fine-tune the parameters of denoising algorithms, such as wavelet threshold levels, to maximize performance [56].

Balancing Signal Fidelity with Processing Speed

In the field of affective Human-Robot Interaction (HRI), the real-time estimation of human emotions from Electroencephalography (EEG) signals presents a critical engineering challenge: achieving an optimal balance between high signal fidelity and low processing latency [3] [2]. The development of automatic systems for patient therapy and evaluation depends on the robot's ability to adapt its behavior dynamically to a patient's changing mood, a process that requires both high accuracy in emotion classification and near-instantaneous processing [3]. This application note details the core trade-offs, quantitative performance metrics, and standardized protocols for implementing real-time artifact removal in HRI research, a cornerstone for building reliable and responsive closed-loop systems.

Core Trade-offs and Performance Metrics

The primary obstacle in real-time EEG processing is online artifact removal, which must cleanse the signal of noise without discarding valuable neural information or introducing prohibitive computational delays [3]. The most common artifacts are electro-oculographic (EOG) blinks, muscle activity, and 50 Hz background noise, each occupying different frequency bands and requiring tailored removal strategies [3] [2].

Table 1: Performance Comparison of Real-Time Artifact Removal and Classification Methods

Methodology | Reported Accuracy | Key Strengths | Primary Trade-offs
ICA + Wavelet Analysis [3] | High (specific values not provided) | Effective for EOG artifacts; works within real-time constraints. | Potential loss of neural information; requires careful parameter tuning.
Mutual Information (Epanechnikov Kernel) [7] | 80.13% | High accuracy; lower computational cost than Gaussian kernel. | Simplicity may come at the cost of robustness for highly complex artifacts.
Mutual Information (Gaussian Kernel) [7] | Slightly inferior to Epanechnikov | Established method with known properties. | Higher computational cost than Epanechnikov kernel.
Second Order Blind Identification (SOBI) [7] | ~12% lower than best MI method | Classical, commonly used algorithm. | Lower performance in emotion classification contexts.

A study optimizing emotion estimation for HRI demonstrated that a carefully chosen methodology could operate under real-time constraints while maintaining high accuracy in both subject-dependent and subject-independent paradigms [3] [2]. Furthermore, research on mutual information-based blind source separation revealed that the choice of algorithm directly impacts this balance; a method using an Epanechnikov kernel achieved 80.13% accuracy in emotion classification while offering a lower computational cost than a comparable Gaussian kernel approach [7]. This underscores that even within a class of algorithms, specific implementations can yield better fidelity-speed outcomes.
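The kernel choice can be illustrated with a toy kernel density estimate: the Epanechnikov kernel has finite support, so only nearby samples contribute to each evaluation point, whereas the Gaussian kernel never vanishes. The bandwidth, grid, and sample set below are arbitrary assumptions, not parameters from [7]:

```python
import numpy as np

def kde(samples, grid, kernel="epanechnikov", h=0.4):
    """Kernel density estimate on a grid, with a choice of kernel."""
    u = (grid[:, None] - samples[None, :]) / h
    if kernel == "epanechnikov":
        k = 0.75 * (1 - u ** 2) * (np.abs(u) <= 1)   # zero outside |u| <= 1
    else:  # gaussian
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / h

rng = np.random.default_rng(5)
samples = rng.standard_normal(2000)
grid = np.linspace(-4, 4, 81)
dens = kde(samples, grid)
# The estimate integrates to ~1 and peaks near zero for standard-normal samples
```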

Detailed Experimental Protocols

Protocol 1: Real-Time EEG Emotion Estimation for HRI

This protocol is adapted from methodologies proven effective for real-time affective HRI [3] [2].

1. Objective: To clean EEG signals of artifacts and extract features for emotion classification within a time frame suitable for seamless human-robot interaction.

2. Materials & Equipment:

  • A wearable EEG system with at least 8 electrodes (recommended positions: AF3, AF4, T7, T8, TP7, TP8, P7, P8) [3].
  • A data acquisition system with a minimum sampling rate of 250 Hz.
  • A robotic platform capable of receiving and acting upon classification outputs.

3. Procedure:

  • Step 1: Signal Acquisition. Record EEG signals from the specified electrodes.
  • Step 2: Initial Filtering. Apply a 1–50 Hz bandpass filter to remove very low and high-frequency noise, including a 50 Hz (or 60 Hz) notch filter to eliminate line noise [3].
  • Step 3: Ocular Artifact Removal. Implement a real-time Independent Component Analysis (ICA) method coupled with wavelet analysis to identify and remove EOG artifacts [3].
  • Step 4: Feature Extraction. From the cleaned signal, extract stable features from the beta (16–30 Hz) and gamma (30–50 Hz) bands, as these are reported to be most effective for emotion estimation [3].
  • Step 5: Feature Smoothing. Apply a smoothing technique (e.g., Linear Dynamic Systems, Moving Average) to the feature space to reduce temporal variability [3].
  • Step 6: Classification. Use a lightweight classifier (e.g., SVM) to map the smoothed features to emotional states (e.g., positive, neutral, negative) [3] [7].
  • Step 7: Robot Action. The classified emotional state is sent to the robot to trigger a pre-programmed adaptive behavior.

4. Critical Timing Parameters: The entire pipeline, from Step 1 to Step 6, must complete processing within a single processing window (e.g., 1-2 seconds) to be considered "real-time" and to keep pace with the dynamic nature of HRI.
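Step 4's band-power features can be sketched with Welch's method; the 22 Hz test tone and segment length are assumptions chosen for illustration:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Integrated PSD of x within [band[0], band[1]] Hz (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=int(fs))   # 1 s segments -> 1 Hz bins
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 250.0
t = np.arange(0, 2, 1 / fs)        # one 2 s processing window
x = np.sin(2 * np.pi * 22 * t)     # synthetic beta-band oscillation
beta = band_power(x, fs, (16, 30))
gamma = band_power(x, fs, (30, 50))
# Nearly all power falls in the beta band for this test signal
```

In a real pipeline this computation runs once per processing window and per channel, so its cost must fit inside the 1-2 s budget noted above.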

Protocol 2: Mutual Information-Based Artifact Removal

This protocol details a specific, high-performance artifact removal method suitable for emotion recognition [7].

1. Objective: To remove artifacts from EEG signals using Mutual Information with Epanechnikov kernel density estimation to improve emotion classification accuracy.

2. Materials & Equipment:

  • EEG data set (e.g., from emotion elicitation experiments using video stimuli).
  • Processing software capable of implementing Blind Source Separation (BSS) algorithms.

3. Procedure:

  • Step 1: Data Preprocessing. Bandpass filter the raw EEG data and perform data whitening.
  • Step 2: Blind Source Separation. Apply the Mutual Information-based BSS algorithm using an Epanechnikov kernel for probability density function estimation to decompose the EEG signals into independent components [7].
  • Step 3: Automated Artifact Identification. Use an automated classifier like the Multiple Artifact Rejection Algorithm (MARA) to label the independent components as artifact or neural signal [7].
  • Step 4: Signal Reconstruction. Remove the components identified as artifacts and reconstruct the "clean" EEG signal.
  • Step 5: Feature Extraction & Validation. Extract advanced features (e.g., High Order Crossings, Hjorth parameters) from the cleaned signal and input them into an SVM classifier to determine emotion classification accuracy [7].
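Step 5's Hjorth parameters follow directly from the variance of the signal and its successive differences; a minimal NumPy version, with an illustrative test sinusoid:

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                                   # signal power
    mobility = np.sqrt(var_dx / var_x)                 # mean frequency proxy
    complexity = np.sqrt(var_ddx / var_dx) / mobility  # bandwidth proxy
    return activity, mobility, complexity

fs = 250.0
t = np.arange(0, 2, 1 / fs)
act, mob, comp = hjorth(np.sin(2 * np.pi * 10 * t))
# A pure sinusoid has complexity very close to 1; broadband noise scores higher
```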

The Scientist's Toolkit: Essential Research Reagents & Materials

Table 2: Key Research Reagent Solutions for Real-Time EEG Processing

Item Name | Function/Application | Specific Example/Note
Wearable EEG Headset | Acquires brain signals in a non-invasive, user-friendly manner. | Systems with pre-configured electrodes at AF3, AF4, T7, T8, TP7, TP8, P7, P8 are ideal for emotion estimation [3].
Independent Component Analysis (ICA) | A blind source separation technique for decomposing signals into statistically independent components, crucial for isolating artifacts. | Used as a core step in real-time EOG artifact removal when combined with wavelet analysis [3].
Mutual Information Algorithm | A method for blind source separation that minimizes the mutual information between estimated components. | The Epanechnikov kernel variant offers high accuracy with lower computational cost [7].
MARA (Multiple Artifact Rejection Algorithm) | An automated tool for classifying independent components from ICA as artifact or brain signal. | Eliminates the need for manual component inspection, facilitating real-time processing [7].
Support Vector Machine (SVM) | A supervised machine learning model used for classification tasks, such as mapping EEG features to emotional states. | Effective for emotion classification post-artifact removal, offering a good balance of accuracy and speed [7].

System Architecture and Workflow Visualization

The following diagram illustrates the complete real-time EEG processing pipeline for human-robot interaction, integrating both the signal processing and the closed-loop HRI components.

[Diagram: closed-loop pipeline. Raw EEG → bandpass/notch filter → ICA / mutual-information BSS → MARA artifact rejection → feature extraction (beta/gamma bands) → feature smoothing → emotion classifier (SVM) → robot decision and policy → robot action execution → human response, which feeds back to the raw EEG input.]

Real-Time EEG Processing Pipeline for HRI

Achieving a functional balance between signal fidelity and processing speed is not merely a technical optimization problem but a fundamental requirement for the success of real-time affective Human-Robot Interaction. The protocols and data presented herein provide a foundation for researchers to build upon. The choice between artifact removal methods like ICA-wavelet hybrids and mutual information-based approaches should be guided by the specific accuracy and latency demands of the target HRI application. Future work must continue to refine these models, pushing the boundaries of computational efficiency while preserving the rich information contained within neural signals to enable ever more seamless and natural human-robot collaboration.

Benchmarking Performance: Metrics, Comparisons, and Future Directions

In human-robot interaction (HRI) research, neurophysiological monitoring through electroencephalography (EEG) provides critical insight into human cognitive and affective states. These signals enable robots to adapt their behavior in real-time, fostering seamless collaboration. However, electrophysiological data are persistently contaminated by artifacts from ocular, muscular, and environmental sources, which can corrupt interpretation and degrade robot decision-making. Validating the performance of artifact removal algorithms is therefore paramount. This application note establishes rigorous protocols for evaluating such methods using three core metrics: Accuracy, Selectivity, and Dipolarity. These metrics collectively ensure that artifact removal preserves neural signals of interest while effectively eliminating contaminants, which is crucial for developing reliable real-time HRI systems.

Metric Definitions and Theoretical Background

Accuracy

Accuracy quantifies the fidelity of the cleaned neural signal by measuring the deviation between the processed output and an uncontaminated ground-truth signal. In real-time HRI, high accuracy is essential to prevent robots from misinterpreting a user's cognitive state [3].

The most common accuracy metric is the Root Mean Square Error (RMSE):

RMSE = √( Σᵢ (ŷᵢ - yᵢ)² / N )

where ŷᵢ is the cleaned signal, yᵢ is the ground-truth signal, and N is the number of data points. A lower RMSE indicates higher accuracy.

Selectivity

Selectivity evaluates an algorithm's ability to isolate and remove artifacts without distorting the underlying neural activity. This is particularly important in HRI, where signals like event-related desynchronization (ERD) must be preserved to accurately decode human motor intention or cognitive load [57].

Selectivity can be assessed by comparing the power in a frequency band of interest (e.g., the alpha band for ERD) before and after artifact removal in a known paradigm.

Dipolarity

Dipolarity is a physiological plausibility check for components identified by blind source separation (BSS) methods like Independent Component Analysis (ICA). It measures how well the scalp topography of a component can be explained by a single equivalent dipole in the brain. Neural sources typically originate in the brain and thus have high dipolarity, while artifacts (from eyes, muscles, or heart) do not [57].

A common measure is the dipolarity index or the residual variance from a single-dipole fit. Components with a residual variance below a threshold (e.g., <15%) are considered "near-dipolar" and likely neural in origin.

Experimental Protocols for Metric Validation

Protocol 1: Quantitative Accuracy Assessment Using Simulated Data

This protocol is designed for a controlled, quantitative evaluation of artifact removal accuracy when a ground-truth signal is available.

  • Objective: To quantify the accuracy of an artifact removal algorithm by applying it to a dataset where a known, clean neural signal has been artificially contaminated.
  • Materials:
    • A clean EEG dataset recorded during a known cognitive paradigm (e.g., steady-state visually evoked potentials).
    • Recordings of typical artifacts (e.g., EOG from eye blinks, EMG from jaw clenching).
  • Procedure:
    • Select a clean, high-SNR epoch of neural data (S_clean).
    • Artificially mix S_clean with a recorded artifact signal (A) at a known Signal-to-Noise Ratio (SNR) to create a contaminated signal (S_contaminated).
    • Apply the artifact removal algorithm under test to S_contaminated to obtain the cleaned signal (S_cleaned).
    • Calculate the RMSE between S_cleaned and S_clean.
    • Repeat the mixing, cleaning, and RMSE steps for multiple SNRs and artifact types.
  • Validation Output: A table of RMSE values across different conditions.
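The mixing and scoring steps above can be sketched in a few lines of NumPy. The artifact scaling is derived from the target SNR in dB; the identity "cleaner" here is only a placeholder for the algorithm under test:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(clean, artifact, snr_db):
    """Scale `artifact` so the clean-to-artifact power ratio equals
    `snr_db`, then add it to the clean signal (the mixing step)."""
    p_s = np.mean(np.asarray(clean) ** 2)
    p_a = np.mean(np.asarray(artifact) ** 2)
    alpha = np.sqrt(p_s / (p_a * 10 ** (snr_db / 10)))
    return clean + alpha * artifact

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

clean = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 1000, endpoint=False))
blink = rng.standard_normal(1000)               # stand-in EOG artifact
contaminated = mix_at_snr(clean, blink, snr_db=0.0)

# The identity "cleaning" gives the worst-case RMSE against which a real
# algorithm's output would be compared.
baseline_rmse = rmse(contaminated, clean)
```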

Protocol 2: Selectivity Assessment Using Event-Related Desynchronization

This protocol validates selectivity using the well-established phenomenon of event-related desynchronization (ERD) in a real-data scenario where absolute ground truth is unavailable.

  • Objective: To ensure the artifact removal process preserves a genuine neural oscillation phenomenon (ERD) while removing muscle artifacts.
  • Materials:
    • EEG data from a self-paced motor task (e.g., foot movements) known to induce beta-band ERD over the sensorimotor cortex [57].
  • Procedure:
    • Pre-process the raw data (band-pass filter 2-45 Hz).
    • Apply the artifact removal algorithm.
    • For both raw and cleaned data, calculate the time-frequency representation aligned to the movement onset.
    • Compute the ERD in the beta band (15-30 Hz) as the percentage power decrease relative to a baseline period.
    • Compare the topography and magnitude of the ERD between the raw and cleaned data. A successful method will reduce high-frequency muscle noise while maintaining or enhancing the focal ERD signature.
  • Validation Output: Time-frequency plots and topographic maps of beta-band power demonstrating preserved ERD after cleaning.
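As an illustration of the ERD computation above, the sketch below estimates beta-band power with Welch's method and reports percent change from baseline; the sampling rate and synthetic signals are assumptions for the example:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Mean Welch PSD within [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return float(np.mean(pxx[(f >= lo) & (f <= hi)]))

def erd_percent(task, baseline, fs, band=(15, 30)):
    """Band power as percent change from baseline; negative values
    indicate desynchronization (a power decrease)."""
    p_task = band_power(task, fs, *band)
    p_base = band_power(baseline, fs, *band)
    return 100.0 * (p_task - p_base) / p_base

rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
beta = np.sin(2 * np.pi * 20 * t)                       # 20 Hz beta rhythm
baseline = beta + 0.1 * rng.standard_normal(t.size)
task = 0.5 * beta + 0.1 * rng.standard_normal(t.size)   # attenuated beta = ERD
erd = erd_percent(task, baseline, FS)
```

A strongly negative `erd` indicates the expected movement-related power decrease.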

Protocol 3: Dipolarity Assessment of Independent Components

This protocol is used to validate the components classified as "neural" by a BSS algorithm, ensuring they correspond to physiologically plausible brain sources.

  • Objective: To evaluate the quality of the source separation by calculating the dipolarity of the resulting independent components.
  • Materials:
    • EEG data processed through a BSS algorithm (e.g., extended Infomax ICA).
    • A forward head model (e.g., a boundary element model based on a standard MRI).
  • Procedure:
    • Perform ICA on the high-pass filtered (>2 Hz) EEG data.
    • For each independent component, obtain its scalp topography (the corresponding column of the mixing matrix).
    • Fit a single equivalent dipole to the component's topography using the head model.
    • Calculate the residual variance (RV) not explained by the single-dipole model.
    • Classify components with RV < 15% as being of neural origin. The proportion of such "near-dipolar" components among all retained components is a key quality metric [57].
  • Validation Output: A list of components with their residual variance values and scalp topography maps with fitted dipoles.
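The RV thresholding in the final steps amounts to simple arithmetic once the dipole fit is available (DIPFIT performs the actual fit); a hypothetical sketch with toy scalp maps:

```python
import numpy as np

RV_THRESHOLD = 0.15  # components under 15% residual variance are "near-dipolar"

def residual_variance(topography, dipole_projection):
    """Fraction of scalp-map variance left unexplained by the fitted dipole."""
    topo = np.asarray(topography, dtype=float)
    resid = topo - np.asarray(dipole_projection, dtype=float)
    return float(np.sum(resid ** 2) / np.sum(topo ** 2))

def classify_components(topographies, dipole_fits):
    """Label each IC 'neural' or 'artifact' from its single-dipole RV."""
    return ["neural" if residual_variance(t, f) < RV_THRESHOLD else "artifact"
            for t, f in zip(topographies, dipole_fits)]

topo = np.array([1.0, 0.8, -0.2, -1.0])   # toy 4-channel scalp map
good_fit = 0.98 * topo                    # dipole explains nearly all variance
bad_fit = np.zeros_like(topo)             # dipole explains nothing
labels = classify_components([topo, topo], [good_fit, bad_fit])
```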

Data Presentation and Analysis

The following tables summarize quantitative data from benchmark studies, providing a reference for expected performance in HRI research.

Table 1: Comparative Performance of Blind Source Separation (BSS) Methods on Real EEG Data During a Foot Movement Task (n=18 subjects). Performance is measured by the ability to reduce muscle artifacts while preserving the Event-Related Desynchronization (ERD). Adapted from [57].

| BSS Method | Artifact Reduction Efficacy | ERD Preservation Quality | Key Advantage |
|---|---|---|---|
| Extended Infomax | High | High | Best overall performance in the comparison |
| FastICA | High | Medium-High | Robust for super-Gaussian sources |
| TDSEP/SOBI | High | Medium-High | Exploits temporal structure |
| Fourier-ICA | High | Medium | Optimized for oscillatory sources |
| Spatio-Spectral Decomposition (SSD) | High | Medium | Maximizes SNR in a band of interest |

Table 2: Example Residual Variance (Dipolarity) and Classification Outcomes for Independent Components from a Single Subject. Components with RV < 15% are classified as neural.

| Component Index | Residual Variance (%) | Proposed Classification | Notes |
|---|---|---|---|
| IC 1 | 4.2% | Neural | Frontal theta, retained |
| IC 2 | 78.5% | Artifact | Lateral eye movement, rejected |
| IC 3 | 9.1% | Neural | Mu rhythm, retained |
| IC 4 | 95.2% | Artifact | Muscle artifact, rejected |
| IC 5 | 6.8% | Neural | Occipital alpha, retained |

The Scientist's Toolkit: Research Reagent Solutions

Table 3: Essential Software and Analytical Tools for Validating Artifact Removal in HRI Research.

| Tool / Resource | Function | Application in Protocol |
|---|---|---|
| EEGLAB | An open-source MATLAB toolbox for processing electrophysiological data. | Core platform for running ICA, dipole fitting, and calculating time-frequency features [57]. |
| BCILAB | A toolbox for building brain-computer interface models. | Useful for prototyping and testing real-time classification pipelines within HRI systems. |
| ICMARC | An automated independent component classifier. | Automates the classification of ICA components as neural or artifactual based on multiple features, including dipolarity [57]. |
| DIPFIT | A plugin within EEGLAB for equivalent dipole modeling. | Directly calculates the residual variance for dipolarity assessment of components (Protocol 3) [57]. |
| Boundary Element Model (BEM) | A volume conduction head model. | Serves as the forward model for the DIPFIT toolbox to compute the single-dipole fit. |

Workflow and Logical Visualization

Raw EEG Data → Preprocessing (bandpass filter, e.g., 2-45 Hz) → Blind Source Separation (e.g., Extended Infomax ICA) → Component Classification → Dipolarity Assessment (residual variance < 15%) → Neural component? (No: return to component classification; Yes: reconstruct signal, excluding artifactual components) → Output: Cleaned EEG Signal → Validation: Accuracy/RMSE (Protocol 1) and Selectivity/ERD (Protocol 2)

Figure 1: Workflow for EEG Artifact Removal and Key Validation Points

  • Accuracy → ensures the HRI need for Trust & Safety → enables correct estimation of user emotion and intent
  • Selectivity → enables the HRI need for Natural Interaction → preserves neural correlates of cognitive state
  • Dipolarity → supports the HRI need for Real-Time Adaptation → ensures robot actions are based on valid brain signals

Figure 2: Relationship Between Core Metrics and HRI System Requirements

Electroencephalography (EEG) is a crucial tool for studying brain dynamics in real-world settings, including human-robot interaction (HRI). However, motion artifacts significantly compromise EEG signal quality, making robust artifact removal essential for generating reliable neural data. This application note provides a structured comparison of three prominent artifact removal methods—Independent Component Analysis (ICA), Artifact Subspace Reconstruction (ASR), and iCanClean—focusing on their efficacy in mobile scenarios relevant to HRI research.

Independent Component Analysis (ICA)

ICA is a blind source separation technique that decomposes multi-channel EEG data into statistically independent components, which are then classified and manually inspected to remove artifactual sources [58] [59]. The Adaptive Mixture ICA (AMICA) algorithm is recognized as one of the most powerful variants, though it is computationally intensive [59]. ICA operates as a stationary method, assuming a single spatial filter applies throughout the recording, making it highly effective for separating constant, fixed-source artifacts like eye blinks and muscle activity [60]. However, its performance depends on data stationarity, and it struggles with non-stationary, high-amplitude motion artifacts [58] [60].

Artifact Subspace Reconstruction (ASR)

ASR is an automated, real-time capable method designed to remove large-amplitude, non-stationary artifacts using a sliding-window approach [46] [58] [60]. Based on principal component analysis (PCA), ASR calibrates on a clean baseline data segment. It then identifies and reconstructs artifact-dominated subspaces in new data that exceed a user-defined standard deviation threshold (parameter "k") [46] [60]. ASR is particularly effective for removing transient, high-amplitude artifacts such as motion-induced spikes and cable sway, thereby improving data stationarity for subsequent ICA decomposition [58] [60].
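The following toy sketch illustrates only the ASR principle: learn per-component amplitude statistics from clean calibration data, then null any sliding-window component whose RMS exceeds mean + k·std. It is not the clean_rawdata implementation, which uses robust statistics, overlapping windows, and subspace reconstruction rather than simple nulling:

```python
import numpy as np

def asr_like_clean(calib, data, k=20, win=125):
    """Toy illustration of the ASR principle (not clean_rawdata's algorithm):
    learn per-principal-component RMS statistics on clean calibration data,
    then null any sliding-window component whose RMS exceeds mean + k*std."""
    calib = calib - calib.mean(axis=1, keepdims=True)
    _, V = np.linalg.eigh(calib @ calib.T / calib.shape[1])   # PCA axes
    n_win = calib.shape[1] // win
    calib_rms = np.array([
        np.sqrt(np.mean((V.T @ calib[:, i * win:(i + 1) * win]) ** 2, axis=1))
        for i in range(n_win)])                               # (windows, comps)
    thresh = calib_rms.mean(axis=0) + k * calib_rms.std(axis=0)
    out = data.copy()
    for start in range(0, data.shape[1] - win + 1, win):
        seg = V.T @ out[:, start:start + win]
        bad = np.sqrt(np.mean(seg ** 2, axis=1)) > thresh
        seg[bad, :] = 0.0                  # drop artifact-dominated subspace
        out[:, start:start + win] = V @ seg
    return out

rng = np.random.default_rng(2)
calib = rng.standard_normal((4, 1250))     # clean calibration segment
data = rng.standard_normal((4, 500))
data[:, 125:250] += 200.0                  # simulated high-amplitude motion burst
cleaned = asr_like_clean(calib, data)
```

Windows without bursts pass through unchanged, while the burst-dominated subspace is suppressed.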

iCanClean

iCanClean is a novel framework that uses canonical correlation analysis (CCA) with reference noise signals to identify and subtract noisy subspaces from EEG data [6] [46] [36]. It can operate with dedicated noise sensors (e.g., in a dual-layer electrode setup) or generate pseudo-reference noise signals from the EEG itself by applying a temporary notch filter [46]. A key parameter is the R² cleaning aggressiveness, which determines the correlation threshold for noise removal [46] [36]. iCanClean is designed for real-time implementation and effectively handles multiple simultaneous artifact types, including motion, muscle, and eye artifacts [6].

Performance Comparison and Quantitative Data

The following tables summarize key performance metrics and characteristics from empirical studies.

Table 1: Performance Comparison on Phantom and Human Data

| Metric | iCanClean | ASR | ICA | Notes & Context |
|---|---|---|---|---|
| Data Quality Score (Phantom) | 55.9% | 27.6% | N/A | "Brain + All Artifacts" condition; baseline was 15.7% [6] |
| Good ICs After Cleaning | 13.2 | Varies | 8.4 (pre-cleaning) | iCanClean increased viable brain components by 57% [36] |
| ERP Congruency Effect | Identified | Identified | N/A | During running; P300 effect found with iCanClean & ASR [46] |
| Single-Trial Classification | N/A | Effective | Effective | ASR+ICA pipeline outperformed minimal cleaning (69% vs 55%) [58] |
| Optimal Parameters | R²=0.65, 4 s window [36] | k=20-30 [60] | AMICA with sample rejection [59] | Parameter sweeps identified optimal settings |

Table 2: Method Characteristics and Applicability

| Characteristic | ICA | ASR | iCanClean |
|---|---|---|---|
| Primary Strength | Separates fixed-source artifacts | Removes transient, high-amplitude bursts | Removes multiple, co-occurring artifact types |
| Real-Time Capability | Low (offline) | High | High |
| Computational Speed | Slow (hours) [6] | Fast | Fast |
| Hardware Dependency | Standard EEG | Standard EEG | Optimal with dual-layer EEG [46] [36] |
| Key Limitation | Requires stationarity; slow | Requires clean calibration data | Performance depends on noise reference quality |

Detailed Experimental Protocols

Protocol 1: Validating with a Phantom Head Setup

This methodology, used to validate iCanClean, provides ground-truth data for quantitative comparisons [6] [34].

  • Apparatus: Create an electrically conductive phantom head with embedded artificial brain sources (e.g., 10 sources) and contaminating sources (e.g., 10 sources for eyes, neck muscles, facial muscles, walking motion, and line-noise) [6] [34].
  • Data Collection: Record high-density EEG (100+ channels) under multiple conditions: Brain only, and Brain combined with individual or all artifact types [6] [34].
  • Processing and Analysis:
    • Apply each cleaning algorithm (iCanClean, ASR, Auto-CCA, Adaptive Filtering) to all conditions.
    • Calculate a Data Quality Score (0-100%) as the average correlation between the known ground-truth brain source signals and the cleaned EEG channels [6] [34].
    • Compare the post-cleaning Data Quality Scores across methods and against the "Brain only" baseline (e.g., 57.2%) [6].
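The Data Quality Score can be approximated as below; matching each ground-truth source to its best-correlated cleaned channel is an assumption of this sketch, not necessarily the exact pairing used in the study:

```python
import numpy as np

def data_quality_score(cleaned_channels, ground_truth_sources):
    """Percent score: for each known phantom brain source, take the best
    absolute correlation across cleaned channels, then average."""
    best = []
    for src in ground_truth_sources:
        best.append(max(abs(np.corrcoef(src, ch)[0, 1]) for ch in cleaned_channels))
    return 100.0 * float(np.mean(best))

rng = np.random.default_rng(6)
t = np.arange(500) / 250.0
src = np.sin(2 * np.pi * 7 * t)                        # known artificial source
cleaned = np.vstack([src + 0.05 * rng.standard_normal(t.size),  # well recovered
                     rng.standard_normal(t.size)])              # unrelated channel
score = data_quality_score(cleaned, [src])
```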

Protocol 2: Assessing Performance During Human Locomotion

This protocol is designed for evaluating methods on human data during tasks like walking or running, relevant for dynamic HRI [46].

  • Experimental Design: Use a dual-task paradigm. Participants perform a cognitive task (e.g., a Flanker task for P300 ERPs) while either standing (control) or actively moving (e.g., jogging) [46].
  • EEG Acquisition: Record high-density EEG (e.g., 120 channels). Using a dual-layer setup (120 scalp electrodes + 120 noise electrodes) is ideal for iCanClean [36]. Otherwise, pseudo-reference signals can be generated [46].
  • Processing and Analysis:
    • Preprocess the data from the movement condition with each method.
      • iCanClean: Use a 4-second window and R² aggressiveness of 0.65 [46] [36].
      • ASR: Use a recommended k threshold of 20-30 [60].
      • ICA: Perform AMICA decomposition, optionally with 5-10 iterations of integrated sample rejection [59].
    • Run ICA on all preprocessed datasets to obtain independent components (ICs).
    • Evaluate outcomes using:
      • Component Dipolarity: Count the number of "good" ICs (residual variance < 15% and high brain probability) [46] [36].
      • Spectral Power: Confirm reduced power at the gait frequency and its harmonics [46].
      • ERP Analysis: Recover expected ERP components (e.g., P300 congruency effect) comparable to the static condition [46].

Protocol 3: Pipeline Optimization for Extreme Motion

This protocol tests robust pipeline combinations for highly dynamic scenarios like sports [58].

  • Task: Subjects perform a demanding physical activity (e.g., skateboarding on a half-pipe) while responding to randomly presented auditory stimuli [58].
  • Analysis:
    • Process the data through different pipelines: Minimal cleaning, ASR only, ICA only, ICA followed by ASR (ICAASR), and ASR preceding ICA (ASRICA) [58].
    • For each pipeline, use a support vector machine (SVM) to classify the presence or absence of an auditory stimulus in single-trial EEG data [58].
    • The pipeline that yields the highest single-trial classification accuracy is considered most effective at preserving brain signals amidst extreme artifacts [58].
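A minimal version of this classification benchmark, using scikit-learn with synthetic stand-in features (real features would come from each pipeline's cleaned EEG):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Stand-in single-trial features (e.g., per-channel band powers) for
# stimulus-present vs. stimulus-absent epochs.
n_trials, n_feat = 200, 16
X = rng.standard_normal((n_trials, n_feat))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4] += 1.0                     # injected class difference

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()   # pipeline comparison metric
```

Running this per pipeline and comparing the cross-validated accuracies mirrors the study's evaluation logic.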

Workflow Visualization

The following diagram illustrates the typical data processing flow and the role of each artifact removal method.

Raw EEG Data → Basic Preprocessing (bandpass filter, etc.), then one of three routes into ICA:

  • For motion-heavy data: ASR Processing (removes non-stationary bursts) → ICA Decomposition (stationarized data)
  • With noise references: iCanClean Processing (subtracts noise subspaces) → ICA Decomposition (pre-cleaned data)
  • For low-motion data: direct ICA Decomposition (separates fixed sources)

ICA Decomposition → Component Classification (e.g., ICLabel) → Artifactual Component Rejection → Cleaned EEG Data. ASR and iCanClean are the real-time capable stages.

Artifact Removal Processing Workflow: This diagram illustrates how ICA, ASR, and iCanClean can be integrated into an EEG processing pipeline. ASR and iCanClean, being real-time capable, are often used as a preprocessing step before ICA, especially for data with significant motion artifacts. They remove large, non-stationary bursts, thereby creating a more stationary data set that improves the subsequent ICA decomposition [58] [60]. iCanClean can be used in place of ASR when reliable noise references are available [46] [36].

The Scientist's Toolkit: Essential Research Reagents

Table 3: Key Materials and Tools for Mobile EEG Artifact Removal Research

| Tool / Material | Function & Application | Example / Note |
|---|---|---|
| High-Density EEG System | Acquires neural data; essential for effective source separation with ICA. | 100+ channels recommended for mobile studies [6]. |
| Dual-Layer EEG Cap | Provides dedicated noise references for iCanClean. | 120 scalp electrodes + 120 noise electrodes [36]. |
| Phantom Head Apparatus | Validates cleaning algorithms with known ground-truth signals. | Contains embedded artificial brain and artifact sources [6] [34]. |
| Inertial Measurement Units (IMUs) | Captures motion dynamics; can be used as reference for artifact removal. | Head-mounted IMUs to correlate motion with EEG artifacts [35]. |
| AMICA Algorithm | Performs high-quality ICA decomposition, robust to some data imperfections. | Can be run with integrated sample rejection [59]. |
| EEGLAB + clean_rawdata Plugin | Implements ASR and other cleaning functions within a standard EEG analysis environment. | Default ASR parameter (k) of 20-30 is recommended [60]. |

For human-robot interaction research requiring real-time artifact removal:

  • iCanClean is the leading choice when dedicated noise sensors are available, demonstrating superior performance in removing multiple, concurrent artifacts [6] [46].
  • ASR is a highly effective and practical alternative using standard EEG systems, particularly for suppressing high-amplitude motion bursts and improving ICA decompositions [46] [58] [60].
  • ICA remains essential for identifying and removing fixed-source artifacts but should be preceded by ASR or iCanClean in mobile scenarios to handle non-stationary noise [58] [60].

The combination of ASR followed by ICA (ASRICA) presents a robust and widely accessible pipeline for preprocessing EEG data in movement-intensive HRI studies [58].

This application note investigates a critical challenge in human-robot interaction (HRI): the degradation of emotion recognition accuracy due to signal artifacts. For HRI to be truly effective and natural in real-world settings, robots must reliably infer human emotional states. However, this process is often compromised by artifacts—unwanted noise originating from motion, physiological sources, or instrumentation—that corrupt the biological signals used for affective computing. This study provides a quantitative comparison of emotion recognition performance with and without specialized artifact removal procedures, detailing the experimental protocols and reagent solutions necessary to implement these methods in real-time HRI research.

Comparative Analysis of Recognition Accuracy

The implementation of artifact removal protocols leads to substantial improvements in the accuracy of emotion recognition systems. The following table summarizes quantitative findings from key studies, comparing performance with and without artifact removal.

Table 1: Impact of Artifact Removal on Emotion Recognition Accuracy

| Modality | Artifact Removal Method | Classifier / Model | Accuracy Without Removal | Accuracy With Removal | Performance Gain | Citation |
|---|---|---|---|---|---|---|
| EEG | Mutual Information (Epanechnikov Kernel) | SVM with HOC & Hjorth Features | ~68% (estimated) | 80.13% | ~12% | [7] |
| EEG | Mutual Information (Gaussian Kernel) | SVM with HOC & Hjorth Features | ~68% (estimated) | 78.50% | ~10.5% | [7] |
| EEG | SOBI (Baseline Method) | SVM with HOC & Hjorth Features | ~68% (estimated) | 75.70% | ~7.7% | [7] |
| EEG | Real-time Ocular Artifact Removal | Subject-Dependent & Independent Models | Not Reported | Maintained High Accuracy | Enabled Real-Time Operation | [3] |
| Multimodal (Audio, Text, Motion) | N/A (Multimodal Fusion) | Multimodal Fusion Network | N/A | 71.04% (on IEMOCAP) | Outperforms single modalities | [61] |

As the data demonstrates, advanced artifact removal can directly increase classification accuracy by over 10% in EEG-based systems [7]. Furthermore, the choice of algorithm is significant, with the Mutual Information method using an Epanechnikov kernel outperforming both the Gaussian kernel variant and the classical SOBI algorithm [7]. Beyond direct accuracy gains, these methods are essential for transitioning from offline analysis to real-time emotion estimation, a prerequisite for dynamic HRI [3].

For context, multimodal fusion approaches, which integrate several clean signal streams (e.g., audio, text, and motion), currently represent the state-of-the-art, achieving the highest benchmark accuracy on standard datasets [61]. This underscores that artifact removal on individual modalities is a foundational step toward robust multimodal systems.

Detailed Experimental Protocols

To ensure reproducibility and facilitate adoption in HRI research, this section outlines detailed protocols for the featured experiments.

Protocol 1: EEG-Based Emotion Recognition with Mutual Information Artifact Removal

This protocol is adapted from the work of Grilo et al. [7], which provides a clear pipeline from raw data to classified emotion.

3.1.1 Objective: To accurately classify human emotions from EEG signals by implementing a novel Blind Source Separation (BSS) algorithm for artifact removal.

3.1.2 Materials: The "Research Reagent Solutions" and key materials required for this protocol are listed below.

Table 2: Research Reagent Solutions for EEG Emotion Recognition

| Item Name | Function / Description | Example / Specification |
|---|---|---|
| EEG Acquisition System | Records brain electrical activity; requires non-invasive, multi-electrode setup. | A system from a university hospital, e.g., 64-channel setup [7]. |
| Emotion Elicitation Stimuli | Standardized audio-visual materials to induce target emotions for ground-truth labeling. | Videos from a standardized database designed to elicit happiness and disgust [7]. |
| Signal Processing Library | Software toolkit for implementing BSS, feature extraction, and classification algorithms. | Custom MATLAB or Python scripts for Mutual Information BSS, HOC, and Hjorth parameters. |
| Self-Assessment Manikin (SAM) | A questionnaire for participants to self-report valence and arousal levels, providing emotion labels. | 9-point pictorial scale based on Russell's circumplex model [7]. |
| Support Vector Machine (SVM) | A supervised machine learning model for classifying extracted features into emotion categories. | SVM with linear or RBF kernel, as implemented in scikit-learn or a similar library. |

3.1.3 Procedure:

  • Data Acquisition & Elicitation:
    • Recruit participants and apply the EEG cap according to the 10-20 international system.
    • Present a series of video stimuli to elicit specific emotional states (e.g., happiness, disgust).
    • After each stimulus, administer the SAM questionnaire to collect subjective ground-truth labels for valence and arousal.
  • Pre-processing:
    • Apply a band-pass filter (e.g., 1-50 Hz) to the raw EEG data to remove slow drifts and high-frequency noise.
    • Pre-whiten the signals as required for the subsequent BSS algorithms.
  • Artifact Removal (Blind Source Separation):
    • Decompose the pre-processed EEG signals into independent latent components using a Mutual Information-based BSS algorithm. Two kernels should be compared:
      • Epanechnikov Kernel: A second-order polynomial offering lower computational cost.
      • Gaussian Kernel: A common default choice for probability density estimation.
    • Automatically identify and flag components corresponding to artifacts (e.g., ocular, muscle) using the MARA (Multiple Artifact Rejection Algorithm) tool.
    • Reconstruct the "clean" EEG signal by removing the artifact-contributing components.
  • Feature Extraction:
    • From the cleaned EEG signals, compute informative features in the time domain.
    • Extract Hjorth parameters (Activity, Mobility, Complexity) which describe the signal's statistical properties.
    • Calculate High Order Crossing (HOC) features, which capture the oscillatory pattern of the signal.
  • Classification & Validation:
    • Train a Support Vector Machine (SVM) classifier using the extracted HOC and Hjorth features from a subset of the data.
    • Validate the model's performance on a held-out test set, reporting accuracy, precision, and recall. Compare the results obtained with different BSS kernels.
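The Hjorth features used in the feature-extraction step have closed-form definitions; a small NumPy sketch:

```python
import numpy as np

def hjorth(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

t = np.linspace(0, 1, 1000, endpoint=False)
act, mob, comp = hjorth(np.sin(2 * np.pi * 10 * t))   # pure 10 Hz tone
```

For a pure sinusoid, Activity equals half the squared amplitude and Complexity is approximately 1.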

The following workflow diagram illustrates the key steps of this protocol:

Raw EEG Signal Acquisition → Pre-processing (band-pass filter & pre-whitening) → Artifact Removal (Mutual Information BSS) → Feature Extraction (HOC & Hjorth parameters) → Emotion Classification (SVM) → Emotion Label Output

Diagram 1: Workflow for EEG-based Emotion Recognition with Artifact Removal.

Protocol 2: Real-Time EEG Processing for HRI

This protocol is optimized for real-time human-robot interaction scenarios, where low latency is critical [3].

3.2.1 Objective: To estimate a user's emotional state from EEG with minimal delay, enabling dynamic robot response.

3.2.2 Key Adaptations for Real-Time Operation:

  • Lightweight Artifact Removal: Implement optimized, computationally efficient algorithms for ocular artifact removal, such as real-time Independent Component Analysis (ICA) combined with wavelet analysis, to run within the system's timing constraints [3].
  • Focused Electrode Montage: Reduce the number of electrodes from a full 64-channel setup to a targeted subset (e.g., 8 electrodes: AF3, AF4, T7, T8, TP7, TP8, P7, P8) that cover brain regions most relevant to emotion (prefrontal and temporal lobes), thereby decreasing data throughput and processing load [3].
  • Feature Smoothing: Apply a feature smoothing technique, such as a Moving Average filter or Linear Dynamic Systems (LDS), to the extracted EEG features. This reduces trial-to-trial variability and stabilizes the emotion estimation output over time [3].
  • Model Optimization: Select machine learning models that balance accuracy with computational speed, avoiding overly complex deep learning architectures that require extensive hyper-parameter tuning and longer processing times.
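For the feature-smoothing adaptation, a minimal causal moving-average smoother looks like this (the LDS variant is more involved); names are illustrative:

```python
import numpy as np

def smooth_features(feat, window=5):
    """Causal moving average over a feature time series
    (rows = time steps, columns = features)."""
    feat = np.asarray(feat, dtype=float)
    out = np.empty_like(feat)
    for i in range(feat.shape[0]):
        lo = max(0, i - window + 1)        # causal: past samples only
        out[i] = feat[lo:i + 1].mean(axis=0)
    return out

rng = np.random.default_rng(7)
raw = rng.standard_normal((300, 3))        # noisy per-trial feature estimates
smoothed = smooth_features(raw)
```

A causal window is used so the smoother adds no look-ahead latency, which matters for real-time HRI.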

The streamlined dataflow for this real-time system is shown below:

Real-time EEG Stream (reduced electrode set) → Lightweight Artifact Removal → Fast Feature Extraction & Smoothing (LDS) → Optimized Classifier → Real-time Emotion Estimate for Robot Behavior

Diagram 2: Real-time EEG processing workflow for HRI applications.

The Scientist's Toolkit

Successful implementation of the aforementioned protocols requires a suite of specialized tools and datasets. The following table catalogs essential resources for researchers in this field.

Table 3: Essential Research Tools and Datasets for HRI Emotion Recognition

| Category | Item | Specific Use-Case |
|---|---|---|
| Datasets | IEMOCAP (Interactive Emotional Dyadic Motion Capture) | Benchmarking multimodal (audio, visual, text) emotion recognition systems [61]. |
| Datasets | SEED | Evaluating EEG-based emotion recognition methodologies [3]. |
| Hardware | Wearable EEG Headsets (e.g., from OpenBCI) | Mobile, real-time acquisition of brain activity for naturalistic HRI studies [42]. |
| Hardware | RGB-D Cameras (e.g., Microsoft Kinect) | Capturing facial expressions and body kinematics for visual emotion recognition [62]. |
| Software/Algorithms | MARA (Multiple Artifact Rejection Algorithm) | Automated identification and rejection of artifact components from decomposed EEG signals [7]. |
| Software/Algorithms | CNNs & Transformers | High-accuracy model architectures for facial expression and speech emotion recognition [61] [63]. |
| Software/Algorithms | Mutual Information BSS | Advanced artifact removal for EEG signals, with Epanechnikov kernel for optimal performance [7]. |

This case study unequivocally demonstrates that dedicated artifact removal is not merely a pre-processing step but a critical determinant of performance in emotion-aware HRI systems. The data show that advanced techniques like Mutual Information BSS can improve EEG-based emotion recognition accuracy by over 10%, making the difference between an unreliable and a functional system [7]. The provided protocols and toolkit offer a clear pathway for researchers to integrate these methods, thereby enhancing the robustness, realism, and effectiveness of human-robot interactions. Future work will focus on standardizing these protocols across diverse populations and robotic platforms, and on further optimizing algorithms for low-power, real-time operation.

Evaluating Performance in Subject-Dependent vs. Subject-Independent Paradigms

The accurate interpretation of physiological signals is fundamental to advancing human-robot interaction (HRI). A critical methodological consideration in this domain is the choice between subject-dependent and subject-independent paradigms for building computational models. This distinction governs how models are trained, validated, and ultimately deployed in real-world HRI scenarios, where real-time processing and adaptability are paramount. The choice of paradigm directly influences the system's ability to generalize across users and its requirement for individualized calibration data, presenting a key trade-off between performance and practicality. This application note provides a structured comparison of these paradigms, detailing their respective performances, outlining standardized protocols for their implementation, and discussing their specific implications for HRI research involving real-time artifact removal.

The table below synthesizes key quantitative findings from empirical studies comparing subject-dependent and subject-independent model performance across EEG-based emotion recognition, human activity recognition (HAR), and motor imagery (MI) tasks.

Table 1: Comparative Performance of Subject-Dependent vs. Subject-Independent Models

| Study Context | Metric | Subject-Dependent Model Performance | Subject-Independent Model Performance | Citation |
|---|---|---|---|---|
| Human Activity Recognition (HAR) | Relative Performance | Person-Specific Models (PSMs) outperform Person-Independent Models (PIMs) by 43.5% | PIMs outperform PSMs by 55.9% | [64] [65] |
| EEG-based Emotion Recognition | Viability | Achieves high accuracy on the SEED database under real-time constraints | Maintains high accuracy on the SEED database, proving viable for cross-subject applications | [2] [3] |
| EEG Artifact Removal | Approach | Subject-dependent artifact removal (SD-AR) enhances classifier performance, especially in subjects with poor motor imagery skills | Standardized, generalized artifact removal methods are commonly used (e.g., ICA, Surface Laplacian) | [66] |
| Error-Related Potential (ErrP) Detection | Focus | Often yields higher within-subject accuracy | Critical for real-world HRI; focus on developing robust cross-subject classifiers | [26] |

Experimental Protocols for Paradigm Evaluation

To ensure reproducible and comparable results in HRI research, adhering to standardized experimental protocols for both paradigms is essential. The following sections detail the methodologies for dataset partitioning and model evaluation.

Protocol 1: Subject-Dependent Model Evaluation

This protocol is designed to assess model performance for a specific individual, maximizing the use of that subject's data.

1. Research Reagent Solutions

  • EEG Recording System: A wearable EEG headset with a predefined electrode montage (e.g., focusing on temporal and prefrontal lobes: AF3, AF4, T7, T8, TP7, TP8, P7, P8 for emotion estimation) [2] [3].
  • Stimulus Presentation Software: Software for delivering emotion-eliciting stimuli (e.g., video clips from the SEED database) or HRI task protocols [2] [3] [26].
  • Computing Environment: A computing system capable of real-time signal processing, featuring libraries for signal filtering (e.g., IIR notch filters for 50 Hz noise), feature extraction (e.g., HOC, Hjorth features), and machine learning (e.g., SVM, LDA classifiers) [2] [7].

2. Step-by-Step Methodology

  • Step 1: Data Acquisition from Target Subject. Record physiological data (e.g., EEG) from a single subject across multiple sessions while they engage in the designed HRI task or stimulus protocol.
  • Step 2: Subject-Dependent Preprocessing. Apply artifact removal techniques tuned to the individual. This may involve using the Subject-dependent Artifact Removal (SD-AR) approach, which selectively applies Surface Laplacian Filtering and Independent Component Analysis (ICA) based on the classifier accuracy achieved for that specific subject [66].
  • Step 3: Feature Extraction and Smoothing. Extract stable features from the cleaned data across time, frequency, and complexity domains. Apply feature smoothing techniques (e.g., Linear Dynamic Systems, Savitzky-Golay filter) to reduce session-wise variability [2] [3].
  • Step 4: Subject-Dependent Partitioning. For the single subject's data, split the collected trials into training and testing sets using a Leave-Trials-Out Cross-Validation scheme. Under this scheme, data from the same session may appear in both training and test sets, but entire trials are held out for testing [64].
  • Step 5: Model Training and Validation. Train a classifier (e.g., SVM) on the training trials and evaluate its performance on the held-out test trials from the same subject. Repeat this process across multiple cross-validation folds.
  • Step 6: Performance Calculation. Calculate the mean accuracy, F1-score, and other relevant metrics across all folds to represent the subject-dependent performance.
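The partitioning and evaluation steps above can be sketched as follows. `GroupKFold` over trial IDs is one reasonable reading of "Leave-Trials-Out"; the features, labels, and fold count are synthetic stand-ins, not values from the cited studies.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, epochs_per_trial, n_feat = 12, 20, 8
X = rng.normal(size=(n_trials * epochs_per_trial, n_feat))   # stand-in features
y = rng.integers(0, 2, size=n_trials * epochs_per_trial)     # binary labels
trial_id = np.repeat(np.arange(n_trials), epochs_per_trial)  # group = trial

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))     # Step 5 classifier
accs = []
for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=trial_id):
    clf.fit(X[train_idx], y[train_idx])       # train on held-in trials
    accs.append(clf.score(X[test_idx], y[test_idx]))

mean_acc = float(np.mean(accs))               # Step 6: average across folds
```

Grouping by trial ID guarantees that epochs from a held-out trial never leak into the training folds, even when they come from the same recording session.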

Protocol 2: Subject-Independent Model Evaluation

This protocol evaluates a model's ability to generalize to new, unseen users, which is critical for plug-and-play HRI systems.

1. Research Reagent Solutions

  • Multi-User Dataset: A dataset containing recordings from numerous subjects (e.g., SEED for emotion, OpenBMI for MI, PhysioNet for HAR) is essential [2] [64] [37].
  • Advanced Artifact Removal Tools: Software for implementing blind source separation methods (e.g., Mutual Information-based BSS with Epanechnikov kernel, SOBI) or deep learning-based denoising (e.g., AnEEG, GAN-based models) that do not rely on subject-specific templates [7] [37].
  • Machine Learning Framework: A framework supporting person-independent models (PIMs) and ensemble methods like κ-weighted Ensembles of Person-Specific Models (EPSMs) [64] [65].

2. Step-by-Step Methodology

  • Step 1: Aggregate Multi-Subject Data. Compile a dataset with recorded signals from a large and diverse pool of subjects.
  • Step 2: Generalized Preprocessing. Apply standardized, subject-independent artifact removal to all data. For example, use a mutual information-based algorithm with an Epanechnikov kernel for BSS, followed by an automated tool like MARA to identify and remove artifact components [7].
  • Step 3: Feature Extraction and Scaling. Extract a common set of features from all subjects. Scale the features using methods that are robust to outliers, fitting the scaler only on the training data to avoid information leakage [2].
  • Step 4: Leave-One-Subject-Out (LOSO) Cross-Validation. Iteratively designate data from one subject as the test set, and data from all remaining subjects as the training set. This is the gold standard for estimating subject-independent performance [64] [26].
  • Step 5: Model Training and Testing. In each iteration, train a model on the data from all but the left-out subject. Evaluate the trained model on the completely unseen data of the left-out subject.
  • Step 6: Aggregate Performance Calculation. Pool the predictions from all LOSO folds and compute overall performance metrics (Accuracy, F1-score, etc.). This provides a realistic estimate of how the model will perform for a new user.

Workflow and Decision Pathway

The following decision pathway illustrates the logical workflow and key decision points for selecting and implementing the appropriate modeling paradigm within an HRI research pipeline.

Start: Define the HRI research objective.
  • Can you collect calibration data from each end-user?
    • Yes → Subject-Dependent Paradigm.
    • No → Is the primary goal personalization or generalization?
      • Personalization → Subject-Dependent Paradigm.
      • Generalization → Subject-Independent Paradigm.
  • Subject-Dependent Paradigm → Is computational efficiency a critical constraint? (PSMs are lighter either way) → Follow the protocol for subject-dependent evaluation.
  • Subject-Independent Paradigm → Follow the protocol for subject-independent evaluation.

Application in Real-Time Artifact Removal for HRI

The choice between subject-dependent and subject-independent paradigms profoundly impacts the design of real-time artifact removal systems in HRI.

  • Subject-Dependent Artifact Removal: In this paradigm, artifact removal can be highly personalized. The SD-AR approach demonstrates that tailoring the preprocessing pipeline (e.g., selectively applying Surface Laplacian filtering and ICA based on individual user performance) significantly improves Motor Imagery classification, particularly for users with poor motor imagery skills [66]. This is highly relevant for therapeutic HRI applications where a robot interacts repeatedly with a single patient, and maximum performance for that individual is the priority.

  • Subject-Independent Artifact Removal: For dynamic HRI where a robot must interact with multiple unknown users, a generalized, subject-independent artifact removal method is mandatory. Techniques like the Mutual Information-based BSS with Epanechnikov kernel [7] or deep learning models like AnEEG [37] are trained on large, diverse datasets to remove common artifacts (EOG, EMG) without requiring user-specific calibration. This enables a "plug-and-play" interaction, which is essential for robots in public spaces or industrial settings with rotating shifts.

Furthermore, for implicit communication channels like Error-Related Potentials (ErrPs)—where a robot must detect a user's perceived error without explicit command—developing robust subject-independent classifiers is a primary focus. This ensures the robot can adapt to new users immediately, a cornerstone of fluid and intuitive HRI [26].

The evaluation of subject-dependent versus subject-independent paradigms reveals a fundamental trade-off between personalized accuracy and broad generalization. Subject-dependent models, exemplified by PSMs, offer superior performance for individual users, making them ideal for dedicated assistive or rehabilitative HRI. Conversely, subject-independent models, or PIMs, provide the generalization necessary for scalable, multi-user applications. The decision framework and standardized protocols provided here offer researchers a clear pathway for selecting, implementing, and evaluating the appropriate paradigm. As HRI evolves, hybrid approaches—such as initial subject-independent models that gradually personalize over time—present a promising avenue for future research, aiming to bridge the performance gap while maintaining the practicality essential for real-world deployment.

Gaps in the Literature and Standardization Challenges

Affective human-robot interaction (HRI) requires robust and real-time analysis of user states, often leveraging electroencephalography (EEG) due to its high temporal resolution and non-invasive nature [3] [67]. However, the practical deployment of these systems is severely hampered by a critical, unsolved problem: the effective and standardized removal of artifacts from EEG signals in real-time scenarios. Artifacts—unwanted noise originating from ocular movements, muscle activity, or environmental sources—corrupt the neural signal, leading to misinterpretation of brain patterns and unreliable robot control or emotion estimation [3] [37] [19]. While numerous artifact removal techniques exist, the field is characterized by a lack of standardization, inconsistent performance reporting, and significant methodological gaps, particularly for online, closed-loop HRI systems. This document outlines the primary gaps in the literature and the associated standardization challenges, providing application notes and experimental protocols to guide future research in this critical area.

Identified Gaps in the Literature

A systematic analysis of current research reveals several interconnected gaps that impede the development of robust real-time artifact removal pipelines for HRI.

Table 1: Key Gaps in Real-Time EEG Artifact Removal for HRI

Gap Category | Specific Challenge | Implication for HRI
Algorithm Selection & Validation | Lack of consensus on optimal algorithms for real-time, low-channel-count systems [31]. | Pipelines optimized for high-density lab EEG may fail with wearable HRI setups.
Real-Time Performance & Computational Cost | Inadequate reporting of processing latency and computational demands [3]. | Methods cannot be evaluated for true real-time (low-latency) application on edge devices.
Generalizability Across Contexts | Poor cross-session and cross-subject performance of artifact removal methods [3]. | Models require frequent recalibration, disrupting seamless interaction.
Artifact-Specific Pipeline Optimization | Rare use of tailored pipelines for specific artifact categories (ocular, motion, etc.) [31]. | A one-size-fits-all approach reduces efficiency and risks removing neural information.
Integration with Downstream HRI Tasks | Disconnect between artifact removal performance and its impact on final task accuracy (e.g., emotion classification, robot control) [7]. | High artifact removal metrics do not guarantee improved HRI performance.

The Wearable EEG Challenge

The shift towards wearable EEG with dry electrodes and low channel counts creates specific problems. Techniques like Independent Component Analysis (ICA), a staple in conventional EEG processing, are less effective with reduced spatial information [31]. A recent review highlights that only a few studies explicitly address the peculiarities of artifacts in wearable systems, which differ from those in high-density lab settings [31]. Furthermore, auxiliary sensors (e.g., IMUs) that could enhance artifact detection under ecological conditions are still significantly underutilized [31].

The Real-Time Processing Hurdle

For HRI, "real-time" implies a strict upper bound on processing latency. Many studies propose advanced methods, including deep learning models like Generative Adversarial Networks (GANs) with LSTM layers [37] or mutual information-based Blind Source Separation (BSS) [7], but fail to quantify their processing time and computational footprint adequately. Without this data, it is impossible to judge their suitability for a closed-loop interaction running on portable hardware.

Standardization and Benchmarking Challenges

The absence of common standards makes it difficult to compare methods and reproduce results, slowing down collective progress.

Table 2: Standardization Challenges in Artifact Removal Research

Standardization Area | Current Status | Recommended Direction
Performance Metrics | Inconsistent use of metrics (e.g., NMSE, RMSE, CC, SNR, SAR) [37]. | Mandatory reporting of a core set of metrics, including task-relevant classification accuracy.
Public Datasets | Scarcity of public, high-quality datasets containing motion artifacts from HRI-relevant scenarios [31]. | Community effort to create and share benchmark datasets with various artifacts and HRI tasks.
Reporting of Computational Cost | Rarely reported [3]. | Mandatory reporting of latency (per sample/epoch) and computational load.
Validation Paradigms | Over-reliance on offline, within-subject validation [19]. | Promotion of online, subject-independent, and cross-session validation protocols.
Definition of "Clean" Signal | No gold standard for ground truth in real data [37]. | Clear documentation of the method used to establish the reference signal (e.g., expert annotation, semi-simulation).
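To make the "core set of metrics" recommendation concrete, the sketch below computes one common definition of each of RMSE, NMSE, correlation coefficient, and SNR between a reference ("clean") signal and a denoised estimate. Published definitions vary across papers, so the exact formulas used should always be documented alongside results.

```python
import numpy as np

def artifact_metrics(clean, denoised):
    """Common agreement metrics between a reference and a denoised signal."""
    err = denoised - clean
    rmse = np.sqrt(np.mean(err ** 2))
    nmse = np.sum(err ** 2) / np.sum(clean ** 2)         # normalised MSE
    cc = np.corrcoef(clean, denoised)[0, 1]              # Pearson correlation
    snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
    return {"RMSE": rmse, "NMSE": nmse, "CC": cc, "SNR_dB": snr_db}

# Illustrative use on a synthetic 8 Hz reference plus small residual noise
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 8 * t)
denoised = clean + 0.05 * np.random.default_rng(2).normal(size=t.size)
m = artifact_metrics(clean, denoised)
```

A semi-simulated benchmark (clean baseline plus injected artifacts) makes these metrics exact, since the ground-truth signal is known by construction.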

Detailed Experimental Protocols

To address these gaps, researchers should adopt rigorous and standardized experimental protocols.

Protocol for Benchmarking Artifact Removal Methods

This protocol is designed for the comparative evaluation of different artifact removal algorithms in an HRI context.

Objective: To quantitatively compare the performance and real-time capability of multiple artifact removal algorithms (e.g., ICA, ASR, wavelet-based, deep learning) using a shared dataset and evaluation framework.

Materials and Reagents:

  • EEG System: A wearable EEG system with ≤16 channels (e.g., dry or semi-wet electrodes).
  • Auxiliary Sensors: (Recommended) EOG sensors, IMUs.
  • Computing Platform: A specified computer or edge device with documented computational resources (CPU, GPU, RAM).
  • Software: Python/MATLAB with toolboxes (EEGLAB, BCILAB, MNE) and the algorithms under test.
  • Dataset: A publicly available dataset containing EEG with artifacts from HRI-relevant tasks, or data collected in-house following the protocol below.

Procedure:

  • Data Acquisition: Record EEG data from N≥15 participants. The paradigm should include:
    • Baseline Periods: 2 minutes of resting-state (eyes open, eyes closed).
    • Artifact Elicitation Tasks:
      • Ocular: Periodic eye blinks and horizontal/vertical saccades.
      • Motion: Head rotations, jaw clenching, swallowing, and walking on a treadmill.
      • HRI Task: A simple interaction task, such as using a BCI to guide a robot arm [15] or an emotion-elicitation task using standardized video clips.
  • Data Preprocessing: Apply a standardized minimal pre-processing chain to all data: high-pass filter (1 Hz), low-pass filter (50 Hz), and notch filter (50/60 Hz).
  • Ground Truth Establishment: For each data segment, establish a "clean" reference. This can be done via:
    • Expert Annotation: Trained experts manually identify and label artifact-contaminated segments.
    • Semi-Simulation: Artifacts are recorded separately and added to clean baseline EEG, allowing for perfect knowledge of the ground truth [37].
  • Algorithm Testing: Run each artifact removal algorithm on the standardized dataset.
  • Performance Quantification: Calculate the metrics from Table 2 for each algorithm. Crucially, also record the average processing time per epoch (e.g., per 1-second segment) on the specified computing platform.
  • Downstream Task Evaluation: Train a classifier (e.g., SVM or a compact deep learning model) to perform the HRI task (e.g., emotion classification [7] or motor imagery decoding [15]) using the cleaned data from each method. Compare the resulting classification accuracies.
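The standardized minimal preprocessing chain from the procedure (1 Hz high-pass, 50 Hz low-pass, 50 Hz notch), together with the per-epoch timing measurement it asks for, might be sketched as follows. Filter orders, the causal filter choice, and the 1-second epoch length are illustrative assumptions.

```python
import time
import numpy as np
from scipy.signal import butter, iirnotch, sosfilt, lfilter

fs = 250.0  # assumed sampling rate
sos_hp = butter(4, 1.0, btype="highpass", fs=fs, output="sos")
sos_lp = butter(4, 50.0, btype="lowpass", fs=fs, output="sos")
b_notch, a_notch = iirnotch(50.0, Q=30.0, fs=fs)

def preprocess(epoch):
    """1 Hz high-pass -> 50 Hz low-pass -> 50 Hz notch, all causal filters."""
    x = sosfilt(sos_hp, epoch)
    x = sosfilt(sos_lp, x)
    return lfilter(b_notch, a_notch, x)

epoch = np.random.default_rng(3).normal(size=int(fs))   # one 1 s epoch
t0 = time.perf_counter()
out = preprocess(epoch)
latency_ms = (time.perf_counter() - t0) * 1e3           # per-epoch latency (ms)
```

Reporting `latency_ms` per epoch on the documented computing platform is exactly the computational-cost figure that Table 2 flags as rarely published.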

Protocol for Real-Time Pipeline Validation

This protocol tests the integrated artifact removal and HRI control system in an online, closed-loop setting.

Objective: To validate the performance of a complete real-time BCI pipeline, including artifact removal, for a specific HRI task like robotic hand control [15].

Procedure:

  • Pipeline Integration: Implement the artifact removal algorithm and the decoding model (e.g., EEGNet [15]) within a real-time BCI software framework (e.g., Lab Streaming Layer, ROS).
  • Latency Budgeting: Define the total allowable latency from EEG signal acquisition to robot command execution (e.g., <200 ms). Allocate time to each processing step (filtering, artifact removal, feature extraction, classification).
  • Online Recruitment & Training: Recruit participants. Collect a small amount of calibration data (≈5 min) to tune the subject-specific model parameters or perform quick fine-tuning [15].
  • Closed-Loop Testing: Participants perform the HRI task (e.g., motor imagery to control a robotic hand [15]) using the real-time pipeline. The system provides continuous feedback.
  • Data Logging: Synchronously log the raw EEG, the cleaned EEG, the classifier outputs, the robot commands, and timestamps for every step.
  • Analysis: Calculate the online task performance (accuracy, completion time). Correlate the presence of artifacts (logged from the real-time detector) with task errors. Verify that the processing pipeline consistently operates within the defined latency budget.
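A minimal sketch of the latency-budgeting and logging steps: each pipeline stage is timed per epoch and the total is checked against the example 200 ms budget from Step 2. The stage functions here are trivial stand-ins, not real artifact removal or decoding.

```python
import time
import numpy as np

BUDGET_MS = 200.0   # example end-to-end budget from the protocol
stages = {
    "filtering":        lambda x: x - x.mean(),
    "artifact_removal": lambda x: np.clip(x, -3, 3),   # stand-in cleaner
    "features":         lambda x: np.array([x.var(), np.abs(np.diff(x)).mean()]),
    "classification":   lambda f: int(f[0] > 1.0),     # stand-in decoder
}

def run_epoch(epoch):
    """Run one epoch through all stages, logging per-stage latency in ms."""
    log, data = {}, epoch
    for name, fn in stages.items():
        t0 = time.perf_counter()
        data = fn(data)
        log[name] = (time.perf_counter() - t0) * 1e3
    return data, log

cmd, log = run_epoch(np.random.default_rng(4).normal(size=250))
total_ms = sum(log.values())
within_budget = total_ms < BUDGET_MS
```

Logging per-stage times (rather than one end-to-end number) shows which step to optimize when the pipeline drifts over budget during closed-loop testing.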

Visualization of Workflows

The following diagrams illustrate the core workflows and relationships discussed in this document.

Real-Time EEG Processing Pipeline for HRI

This workflow outlines the complete data flow from signal acquisition to robot interaction, highlighting the critical role of artifact removal.

EEG Signal Acquisition → Raw EEG Signal → Real-Time Artifact Removal → Cleaned EEG Signal → Feature Extraction → Intent/Emotion Classification → Robot Action/Feedback → User Response & New State → (closes the loop) the user's new brain state influences the next EEG acquisition.

Algorithm Selection and Validation Logic

This decision tree provides a logical framework for selecting and validating an artifact removal method based on HRI requirements.

• Wearable system?
  • No (lab system) → use a deep learning / complex model.
  • Yes → Real-time latency requirement?
    • No (offline) → use a deep learning / complex model.
    • Yes → Sufficient computational resources?
      • No → use a lightweight method (e.g., RLS, ASR).
      • Yes → Artifact type known?
        • Yes → use a specific pipeline (e.g., EOG-focused).
        • No → use a generic method (e.g., stationary WF).
• All paths end with: validate on the target HRI task.

The Scientist's Toolkit

Table 3: Key Research Reagent Solutions for Real-Time EEG Artifact Removal

Item / Technique | Function in Research | Application Notes
Independent Component Analysis (ICA) [3] [67] | Blind source separation to identify and remove artifact-related components. | Powerful but computationally heavy; less effective with low channel counts; often used as a benchmark.
Artifact Subspace Reconstruction (ASR) [31] | Statistical method for removing high-variance, non-stationary artifacts in real-time. | Suited for online use; works well with wearable EEG; requires parameter tuning.
Wavelet Transform [31] | Time-frequency decomposition to isolate and remove artifacts in specific frequency bands. | Effective for transient artifacts like eye blinks; offers a good balance of performance and speed.
Generative Adversarial Networks (GANs) [37] | Deep learning model to generate artifact-free EEG signals from noisy inputs. | State-of-the-art performance; requires large datasets for training and significant computational resources.
Mutual Information-based BSS [7] | Blind source separation using mutual information with kernels (e.g., Epanechnikov) to maximize independence. | Emerging method; shown to be effective with lower computational cost than some alternatives.
Recursive Least Squares (RLS) [37] | Adaptive filtering technique for noise cancellation, often using a reference signal. | Very low latency and computational cost; ideal for edge computing; requires a reference signal.
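To illustrate the RLS entry in Table 3, the sketch below implements a standard RLS adaptive noise canceller: a reference channel (standing in for EOG) drives an adaptive FIR filter whose output is subtracted from the contaminated signal. The filter order, forgetting factor, and synthetic signals are assumptions; the per-sample update cost is O(order²), which is what makes RLS attractive for edge devices.

```python
import numpy as np

def rls_cancel(primary, reference, order=4, lam=0.99, delta=0.01):
    """RLS adaptive noise cancellation: primary minus estimated artifact."""
    n = len(primary)
    w = np.zeros(order)                     # adaptive FIR weights
    P = np.eye(order) / delta               # large initial inverse correlation
    cleaned = np.zeros(n)
    for i in range(order - 1, n):
        u = reference[i - order + 1:i + 1][::-1]   # taps: ref[i], ref[i-1], ...
        y = w @ u                                  # artifact estimate
        e = primary[i] - y                         # a-priori error = cleaned sample
        k = P @ u / (lam + u @ P @ u)              # RLS gain vector
        w = w + k * e                              # weight update
        P = (P - np.outer(k, u @ P)) / lam         # inverse-correlation update
        cleaned[i] = e
    return cleaned

# Synthetic demo: a 10 Hz "neural" rhythm plus artifact leaked from a reference
rng = np.random.default_rng(5)
n, fs = 2000, 250.0
neural = np.sin(2 * np.pi * 10 * np.arange(n) / fs)
ref = rng.normal(size=n)                                 # e.g., EOG reference
artifact = np.convolve(ref, [0.8, 0.4, 0.2])[:n]         # causal leakage
contaminated = neural + artifact
cleaned = rls_cancel(contaminated, ref)
```

After convergence the residual closely tracks the neural rhythm, since the uncorrelated neural component cannot be predicted from the reference and therefore survives the subtraction.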

Conclusion

Real-time artifact removal is a foundational pillar for enabling robust and intuitive Human-Robot Interaction. This synthesis demonstrates that while established techniques like ICA and emerging methods like ASR and iCanClean provide powerful solutions, no single algorithm is universally superior. Success hinges on a carefully optimized pipeline tailored to specific HRI tasks, whether for affective computing in healthcare or industrial collaboration. Future progress will be driven by deep learning models adaptable to individual users, the strategic integration of auxiliary sensors, and a stronger focus on standardizing validation protocols for wearable systems. For biomedical research, these advancements promise more reliable brain-computer interfaces for therapeutic applications, paving the way for HRI systems that can genuinely understand and adapt to human cognitive and emotional states in real-time.

References