The essence of brain-computer-interface (BCI) mind reading is neural decoding. Recent studies have targeted functional brain areas including the primary visual cortex, Broca's area (the speech production centre), and Wernicke's area (the auditory language comprehension centre), but have relied on complex functional magnetic resonance imaging or implanted electrode arrays. To overcome this instrumental complexity and the limitations of implantation, we developed a speech-centre decoding method that uses an external single-channel near-infrared transmission sensor placed at the auricular frontal-lobe point, together with an oscilloscope and computer post-processing, to reproduce and analyse the neurovascular-coupling relationship between the speech centre (Broca's area), during silent reading of Chinese words such as 你 (you), 我 (I), or 她/他 (she/he), and the auricular frontal-lobe point. The measurement wavelength of 875 nm lies within the optical window of living tissue and is relatively sensitive to absorption by oxygenated haemoglobin; the method further exploits the fact that arterial oxygen saturation at the earlobe reaches about 90%, and the test protocol accounts for the typical latency of the signal. Data processing shows, first, that the recognition rate of the Chinese words in the time or frequency domain can reach 70% (4 subjects), and second, that the time-domain features reproduce, for each of the three words, an initial-consonant signature (weak transition potentials) and a final-vowel signature (tonic spindle potentials). This work may offer patients with acquired aphasia a new parallel channel, under speech-centre control, for "speaking with the ear" (parallel to the brain-to-mouth speech production pathway), with the merits of being single-channel and noninvasive.
Mind reading through a brain-computer interface (BCI) is, in essence, neural decoding. Recent studies have targeted functional areas of the cerebral cortex such as the early visual region, Broca's area, and Wernicke's area, but they have typically relied on complex instrumentation (for example, fMRI) or implanted multi-channel interfaces (for example, electrode arrays). To overcome these limitations, here we develop a decoding method based on a single external near-infrared spectroscopy (NIRS) probe at the Chinese auricular frontal-lobe point, which characterizes the relationship between silent reading (in mind) of isolated Chinese words and the NIRS activity mapped from Broca's area (the speech centre) to that auricular point. The method rests on three principles: auricular brain mapping (neurovascular-coupling links from Broca's area to the Chinese auricular frontal-lobe point, called Antitragus Point 1, AT1); the optical life window (an active 875 nm wavelength with a high absorption factor for oxyhaemoglobin at about 90% oxygen saturation in the tissue at AT1); and a basic BCI test paradigm. We show that this compact BCI method makes it possible to identify the silently read words SHE/HE (她/他, identical in pronunciation), YOU (你), and I (我). Identification signatures extracted in the time (or frequency) domain reach a positive recognition rate as high as 70% across 4 subjects, using a single-probe unfiltered sensor kit, an Agilent 54624A oscilloscope, and a PC (running the Excel 546000 Toolbar and MATLAB 7.0). The time-domain signatures of the three words differ in their main discharge series (tonic potentials as vowel signs) linked by low-amplitude series (transition potentials as consonant signs). Our results suggest that it may soon be possible to let patients with acquired aphasia "speak with the ear" through indirect, noninvasive, single-probe measurement of speech-cortex activity during silent reading.
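The decoding idea above (a weak transition segment as the consonant sign followed by a tonic spindle segment as the vowel sign, classified in the time domain) can be illustrated with a minimal sketch. Everything here is a hypothetical illustration, not the authors' actual pipeline: the sampling rate, the segment split, the feature choices, and the synthetic traces are all assumptions made for demonstration.

```python
# Hypothetical sketch of time-domain word decoding from a single NIRS trace.
# Assumptions (not from the paper): 100 Hz sampling, a fixed 0.3 s split
# between the "transition" (consonant-like) and "tonic" (vowel-like)
# segments, and a nearest-template classifier over two crude features.
import numpy as np

FS = 100  # assumed sampling rate, Hz


def features(trace, split_s=0.3):
    """Mean amplitude of the transition segment and RMS of the tonic
    segment - a crude stand-in for consonant/vowel signatures."""
    k = int(split_s * FS)
    transition, tonic = trace[:k], trace[k:]
    return np.array([transition.mean(), np.sqrt((tonic ** 2).mean())])


def classify(trace, templates):
    """Nearest-template decision over per-word feature templates."""
    f = features(trace)
    return min(templates, key=lambda w: np.linalg.norm(f - templates[w]))


# Synthetic demo: one-second trials where each "word" has a distinct
# tonic spindle amplitude after a flat transition segment.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS


def make_trial(amp, noise=0.05):
    trial = np.where(t < 0.3, 0.2, amp * np.sin(2 * np.pi * 8 * t))
    return trial + noise * rng.standard_normal(t.size)


templates = {w: features(make_trial(a))
             for w, a in [("YOU", 0.5), ("I", 1.0), ("SHE/HE", 1.5)]}
print(classify(make_trial(1.0), templates))  # prints "I" for this synthetic demo
```

In practice a pipeline like this would also need baseline correction, latency alignment (the "typical delay" mentioned above), and cross-validation of the templates across trials and subjects; the sketch only shows the shape of the time-domain signature comparison.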