INFORMATION ABOUT PROJECT,
SUPPORTED BY RUSSIAN SCIENCE FOUNDATION

The information is prepared on the basis of data from the RSF information-analytical system; the content is presented as provided by the authors. All rights belong to the authors; the use or reprinting of materials is permitted only with the prior consent of the authors.

 

COMMON PART


Project Number: 18-19-00593

Project title: Fluent human-machine interaction based on expectation and intention markers: neurophysiological and neuroengineering foundations

Project Lead: Shishkin Sergei

Affiliation: Moscow State University of Psychology and Education

Implementation period: 2018 - 2020

Research area: 09 - ENGINEERING SCIENCES, 05-106 - Neurobiology

Keywords: human-machine interfaces, brain-computer interface, neural interfaces, brain-machine interfaces, intention, attention, gaze interaction, eye movements, eye tracking, ergonomics, EEG, MEG, eye-brain-computer interfaces


 

PROJECT CONTENT


Annotation
Identification and implementation of human intentions based on the analysis of brain signals is the core of brain-computer interface (BCI) technology, which is currently under intensive development. However, existing BCIs are sensitive not to the intention itself but either to activation of the motor system associated with attempts to express the intention, or to neurophysiological phenomena accompanying additional mental actions that mediate the expression of intention. Creating an interface that identifies correlates of intention in neurophysiological signals as directly as possible is of particular interest, since such an interface may have new, unique properties. This assumption can be verified only experimentally, after a sufficiently effective working model of such an interface has been created, and this is what we intend to do in this project. The project continues the research cycle initiated within the framework of the RSF project 14-28-00234 (2014-2016), in which we identified a neurophysiological "marker" of voluntary actions performed with the help of gaze dwells and developed a simple version of a hybrid eye-brain-computer interface (EBCI) controlled by combinations of gaze dwells and a positive response of an EEG classifier. In the new project, this EBCI will be significantly strengthened by using a combination of EEG and magnetoencephalogram (MEG) signals and a number of additional methodological improvements. Thus, the characteristics of the EBCI will be brought to a level at which it will be possible to conduct experiments testing the possibility of qualitatively enhancing human-machine interaction. In addition to the novelty of the EBCI itself, many of the methodological solutions underlying it will also be novel. In particular, real-time classification of synchronously recorded MEG and EEG will be realized for the first time, as well as control based on a combination of gaze and MEG. We also expect to obtain new fundamental data on the dynamics of intention formation.
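
To make the described control scheme concrete, below is a minimal sketch, in Python, of a hybrid decision rule of this kind: a command is issued only when a sufficiently long gaze dwell coincides with a positive response of the EEG classifier. All names and thresholds here are illustrative assumptions, not the project's actual software.

    # Hybrid gaze + EEG decision rule (illustrative sketch, hypothetical names).
    DWELL_MS = 500        # assumed dwell-duration threshold, ms
    P_THRESHOLD = 0.5     # assumed decision threshold for the EEG classifier

    def decide(dwell_duration_ms, target_id, p_intentional):
        """dwell_duration_ms, target_id: from the eye tracker;
        p_intentional: EEG classifier's probability that the dwell is a command."""
        if dwell_duration_ms < DWELL_MS:
            return None              # too short to count as a dwell
        if p_intentional >= P_THRESHOLD:
            return target_id         # dwell confirmed as intentional: act on it
        return None                  # spontaneous dwell: safely ignored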

Expected results
In the project, a fundamentally new type of human-machine interface will be developed and implemented as a working model. The new eye-brain-computer interface (EBCI) will significantly exceed modern human-machine interfaces in its capabilities. It will provide computer control on the basis of short gaze dwells loaded with the intention to act. Unlike in existing gaze communication technology, the presence of intention will be recognized "on the fly" by highly effective statistical classifiers sensitive to the markers of intention and expectation in the electroencephalogram (EEG) and magnetoencephalogram (MEG). It is assumed that, when using the EBCI developed in the project, the operator will be able to give commands to the computer not only without physical but also without mental effort, as easily and intuitively as possible, which will improve the efficiency of computer use in solving certain types of intellectual tasks. The experimental verification of this hypothesis is the main goal of developing a powerful EBCI. If the hypothesis is confirmed, it will be concluded that further development of EBCIs of this type is promising, and more practical (cheap, compact, convenient, etc.) implementations, including ones based on gaze control plus EEG only, could be created for a wide range of users. Since the EBCI will ensure the implementation of intentions without activation of the motor system and without creating additional cognitive load, it can also be useful to patients and people with various motor impairments. Most of the tasks to be solved in the project will also have independent fundamental and applied significance: we will search for new markers of intention and expectation in the EEG and MEG, develop new methods for classifying states with different readiness for action, develop high-performance detectors of expectation (and, possibly, also of intention) on the basis of EEG + MEG, develop a technique for operant conditioning of the amplitude of the expectation/intention marker(s), and obtain new data on the dynamics of intention formation and the mechanisms of this process.
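
As one possible reading of the operant conditioning task mentioned above, the following Python sketch assumes a stream of online estimates of the marker amplitude and rewards the participant whenever the amplitude exceeds an adaptive threshold; the shaping logic and all parameters are assumptions for illustration, not the project's technique.

    # Illustrative operant conditioning loop with an adaptive reward threshold.
    def conditioning_run(amplitudes, start_threshold=1.0, adapt_rate=0.05):
        """amplitudes: iterable of online marker-amplitude estimates (assumed)."""
        threshold, rewards = start_threshold, 0
        for amp in amplitudes:
            if amp > threshold:
                rewards += 1                     # here: present reward feedback
            # move the threshold toward recent amplitudes ("shaping")
            threshold += adapt_rate * (amp - threshold)
        return rewards, threshold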


 

REPORTS


Annotation of the results obtained in 2020
Through the analysis of previously recorded MEG data, we identified oscillatory phenomena that allow discriminating spontaneous eye fixations from fixations used to intentionally send commands to the computer. The most pronounced effect was desynchronization in the theta range: it appeared approx. 200-250 ms after fixation onset and increased until feedback was presented. This reaction was present for both voluntary and involuntary fixations, though it was significantly stronger for voluntary ones. It was statistically significant at most sensors and was focused in the parieto-occipital area. The second most important effect was synchronization in the alpha range in the interval of -100…200 ms, evident over a cluster of sensors in the frontocentral region of the right hemisphere. The results of MEG classification after removing frequencies above 7 Hz did not differ significantly from those for the broadband signal (0…45 Hz). It remains unclear whether the neural network classifier uses induced or evoked MEG components (at the previous stage we showed that the fixation-locked MEG components that discriminate voluntary and spontaneous fixations in a statistically significant way mainly represent slow changes in signal amplitude). Nevertheless, the results allow us to refine MEG preprocessing for brain-computer interfaces (Vasilyev et al., in prep.). The deconvolution-based method of signal component separation, originally designed for the analysis of EEG recorded against the background of natural eye movements (Dimigen and Ehinger, 2021), was adapted for the analysis of MEG recorded during gaze-based control. The method allowed us to remove components related to eye movements and confirmed the existence of a slow MEG component related to the voluntary eye dwell, developing from approx. 200 ms after the onset of the first fixation. We demonstrated the similarity of the shape and topography of the responses emerging after the initial choice and after its confirmation by gazing at the designated region of the screen (Vasilyev et al., in prep.). Another method was developed to clarify the interpretation of the weights of the LF-CNN convolutional neural network (Zubarev et al., 2019), as it was necessary to examine which sensors and time points influence the classification most. Using this method, we confirmed that no artifacts of oculographic origin contributed to the classification of voluntary and spontaneous fixations from MEG data. We also showed that the accuracy of classifying voluntary and spontaneous eye fixations in a gaze control task can be improved by training the classifier on data from repeated MEG sessions conducted on different days. With the LF-CNN convolutional neural network (Zubarev et al., 2019), the classification ROC AUC in a group of 5 participants increased from 0.64 (training on data from the same day on which the test sample was collected) to 0.72 (training on 3-4 extra sessions). Since these results were obtained on a small group (5 participants; classification improved in 4), they can be presented only as preliminary; nevertheless, it appears that the MEG signal carries more information useful for eye-brain-computer interfaces (EBCIs) than could be judged from our earlier classification results.
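
The kind of fixation-locked time-frequency analysis described above can be sketched with the open-source MNE-Python package. The variables raw and events (fixation onsets labeled voluntary/spontaneous), as well as the window, baseline, and band settings, are assumptions for illustration and do not reproduce the project's actual pipeline.

    import numpy as np
    import mne

    # Assumed inputs: `raw` is the preprocessed MEG recording; `events` marks
    # fixation onsets (id 1 = voluntary dwell, id 2 = spontaneous fixation).
    epochs = mne.Epochs(raw, events, event_id={"voluntary": 1, "spontaneous": 2},
                        tmin=-0.5, tmax=1.0, baseline=(-0.5, -0.2), preload=True)

    freqs = np.arange(4.0, 13.0, 1.0)  # covers theta (4-7 Hz) and alpha (8-12 Hz)
    power = mne.time_frequency.tfr_morlet(epochs["voluntary"], freqs=freqs,
                                          n_cycles=freqs / 2.0, return_itc=False)
    # A power decrease (desynchronization) relative to baseline would appear in
    # the theta band from roughly 200-250 ms after fixation onset.
    power.plot_topo(baseline=(-0.5, -0.2), mode="percent")
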
These results suggest that training the classifier on recordings from extra sessions conducted before the online experiments might be advantageous for EBCI online experiments. A detailed comparison of eye movement-related activity during spontaneous and intentional eye dwells showed that in voluntary eye dwells a corrective saccade shifts the gaze to the center of the target object (to foveate it), after which microsaccadic movements are inhibited, while in spontaneous fixations no shift to the center occurs and low-amplitude saccades therefore continue to emerge regularly. We intend to use these results to improve the classification of voluntary and involuntary eye fixations. We designed a method that allows analyzing the BOLD signal related to eye movement "events" occurring in close temporal proximity. By applying it to data from the fMRI experiment in which participants played a game using gaze-based control inside the scanner, we showed that responses can be registered at intervals as short as 2-3 seconds. The analysis of phase-amplitude dependencies demonstrated that the choice-confirming eye fixation was the source of the strongest fMRI signal (similar to the results of the MEG analysis) and identified the brain regions activated in gaze-based control tasks. The BOLD signals related to gaze-based control varied significantly in phase (the time of their onset and peak relative to the event) across brain regions. The phases of the responses varied both upward and downward relative to the canonical hemodynamic response function, with delays ranging from -3 to +2 seconds. The phases were homogeneous within distinct topographical clusters. We showed that these topographical clusters were not random and that they generated, with high specificity, responses whose phase deviated in a consistent way from the canonical hemodynamic response function. We showed the high ergonomics of a simple (i.e., not requiring speech recognition technology) voice confirmation of gaze-based screen object selection when it was "congruent" with the context of the primary task performed by the user (Zhao et al., 2020). However, if the confirmation was carried out using a partially irrelevant word ("you"), the results worsened compared to confirmation by uttering the number of the object (significantly with respect to convenience scores (p=0.0008), and not statistically significantly with respect to selection time); the task presupposed sequential selection of objects in numerical order and in reverse order. Since completely congruent confirmation of gaze-based selections is almost impossible to provide, these results support the high promise of using passive brain-computer interfaces for confirmation (for which the issue of congruency does not arise, because no further actions of the user are required), i.e., the technology to which our project is devoted. At this stage of the project we also completed the preparation of several papers presenting results obtained at previous stages: Dubynin et al. (under review), Ovchinnikova et al. (under review), Zhao et al. (under review). We had to reschedule our project plans, since it became apparent that our progress towards higher classification accuracy was too slow to achieve the desired level by the end of 2020, meaning that any new experiments would have been premature.
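
The oculomotor differences just described suggest simple per-dwell features. The sketch below, with an assumed sampling rate, velocity threshold, and hypothetical inputs, shows how the presence of an early corrective saccade toward the target centre and the number of small saccades could be extracted; it is an illustration, not the project's implementation.

    import numpy as np

    def dwell_oculomotor_features(gaze_xy, target_xy, fs=500.0, vel_thresh=30.0):
        """gaze_xy: (n, 2) gaze positions (deg) during one dwell;
        target_xy: (2,) target centre. Both are assumed inputs."""
        vel = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs   # deg/s
        moving = vel > vel_thresh
        onsets = np.flatnonzero(np.diff(moving.astype(int)) == 1)     # saccade onsets
        dist = np.linalg.norm(gaze_xy - target_xy, axis=1)
        # Corrective saccade: the first detected saccade ends closer to the
        # target centre than it started (expected for voluntary dwells).
        end = min(onsets[0] + 10, len(dist) - 1) if onsets.size else 0
        corrective = bool(onsets.size) and dist[end] < dist[onsets[0]]
        # Spontaneous fixations are expected to keep producing small saccades.
        return {"n_small_saccades": int(onsets.size), "early_corrective": corrective}
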
The main reason for our slow advancement was insufficient knowledge of the human brain mechanisms supporting gaze-based interaction, and also of the specifics of eye movement-related activity itself in such interaction. Unfortunately, little work has been dedicated to this topic (the only study directly addressing these questions, Shishkin et al. (2016), was carried out as part of our previous project supported by the Russian Science Foundation). For this reason, and also due to serious restrictions on experiments with human participants during the COVID-19 pandemic, we had to postpone the planned experimental studies and instead immerse ourselves in in-depth analyses of the data collected in earlier experiments, including the development of new methods for data processing. The results of this research opened up more possibilities for the creation of highly effective classifiers of eye fixations, which might be utilized not only in eye-brain-computer interfaces (EBCIs) but in other human-machine interaction systems as well.

 

Publications

1. Ovchinnikova A.O., Vasilyev A.N., Zubarev I.P., Kozyrskiy B.L., Shishkin S.L. MEG-based detection of voluntary eye fixations used to control a computer. Frontiers in Neuroscience, 15:619591 (year - 2021) https://doi.org/10.3389/fnins.2021.619591

2. Zhao D.G., Karikov N.D., Melnichuk E.V., Velichkovsky B.M., Shishkin S.L. Voice as a Mouse Click: Usability and Effectiveness of Simplified Hands-Free Gaze-Voice Selection. Applied Sciences, 10(24), 8791 (12 pages) (year - 2020) https://doi.org/10.3390/app10248791

3. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Melnichuk E.V., Isachenko A.V., Velichkovsky B.M., Shishkin S.L. A passive BCI for monitoring the intentionality of the gaze-based moving object selection. Journal of Neural Engineering, 18:026001 (year - 2021) https://doi.org/10.1088/1741-2552/abda09

4. Dubynin I.A., Yashin A.S. A study of subjective distance estimates during the performance of a grasping movement (in Russian). CAICS 2020: National Congress on Cognitive Research, Artificial Intelligence and Neuroinformatics, - (year - 2021)

5. Ovchinnikova A.O., Vasilyev A.N., Zubarev I.P., Kozyrskiy B.L., Shishkin S.L. Single-trial MEG classification for the detection of the intentional eye dwells. CAICS 2020: National Congress on Cognitive Research, Artificial Intelligence and Neuroinformatics, - (year - 2021)

6. Ovchinnikova A.O., Vasilyev A.N., Zubarev I.P., Kozyrskiy B.L., Shishkin S.L. Detection of intentional eye fixations by convolutional neural networks applied to fixation-related magnetoencephalogram. Brain-Computer Interface: Science and Practice (Samara, Russia), - (year - 2020)

7. Shishkin S.L., Vasilyev A.N., Ovchinnikova A.O., Zubarev I.P., Butorina A.V., Kozyrskiy B.L., Nuzhdin Y.O., Dubynin I.A., Svirin E.P., Ossadtchi A.E., Velichkovsky B.M., Stroganova T.A. MEG features for fast detection of intentional eye-gaze dwells in an eye-brain-computer interface. International BCI Meeting 2020, - (year - 2021)

8. Shishkin S.L., Vasilyev A.N., Butorina A.V., Dubynin I.A., Svirin E.P., Ossadtchi A.E., Velichkovsky B.M., Stroganova T.A. Studying the magnetoencephalogram in players of a gaze-controlled game for the development of new human-machine interfaces. CAICS 2020: National Congress on Cognitive Research, Artificial Intelligence and Neuroinformatics, - (year - 2021)

9. Shishkin S.L. Applying the EEG/MEG correlates of intention and anticipation for the enhancement of gaze-based interaction and building new neuroadaptive technologies. International Forum "COGNITIVE NEUROSCIENCE – 2020" (Yekaterinburg, Russia), - (year - 2021)

10. Shishkin S.L. Consciousness sets limits to brain-computer interfaces. Brain-Computer Interface: Science and Practice (Samara, Russia), - (year - 2020)


Annotation of the results obtained in 2018
In 2018, special attention was paid to the adaptation of deep learning methods for EEG- and MEG-based automatic recognition of eye gaze fixations used to give commands to a computer. In particular, artificial neural networks with deep architectures, as well as methods for tuning their hyperparameters, were adapted to the tasks of the eye-brain-computer interface (EBCI) and tested on data recorded in the EBCI paradigm (MEG and, in part, EEG). In the analysis of EEG data recorded in the EBCI paradigm, EEGNet (Lawhern et al., 2016, 2018) after hyperparameter optimization yielded a ROC AUC of 0.79 ± 0.08 (M ± SD) on a test sample not used in training or hyperparameter optimization, a 16% increase compared to the ROC AUC of the previously used sLDA classifier on the same sample (13 subjects). This result demonstrated that the EEG within gaze fixations used for control contains much more information useful for classification than previous results suggested. Moreover, it became evident that a much larger share of "control" fixations can be successfully classified; before this result was obtained, it seemed possible that the "control markers" are completely absent in a large portion of such fixations. The result also showed the importance of in-depth optimization of neural network hyperparameters, so this procedure will be used in further work on the project. Importantly, the improvements were achieved even without the use of MEG, which, firstly, can be of great practical importance and, secondly, complements the possibilities of improving classification accuracy through the joint use of MEG and EEG and through operant conditioning. For the first time, MEG and EEG registration synchronized with eye movement registration was carried out while participants (n = 23) were engaged in a game task that required computer control using eye gaze fixations. For the first time, differences in the MEG were identified in the presaccadic time interval preceding gaze fixations used for intentionally giving a command to a computer, as compared to spontaneous gaze fixations. The differences were most pronounced over the frontal cortex, both in the medial and in the dorsolateral regions of the left hemisphere. The significance of this result is that it may make it possible to significantly improve classification accuracy in the EBCI. Moreover, very early detection of intention (even before the beginning of the "controlling" gaze fixation) might become possible, opening a way to create an especially fast EBCI. In addition, a number of other works were carried out, primarily of a methodological nature, preparing the launch of an EBCI based on MEG + EEG in near real-time (online) mode and aimed at improving its performance.
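
The role of hyperparameter optimization can be illustrated with a generic random search evaluated by ROC AUC. In this self-contained Python sketch a logistic regression stands in for the EEGNet-like network, and random arrays stand in for fixation-locked EEG epochs; only the search-and-evaluate pattern is shown, not the project's actual procedure or search space.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Placeholder data: 200 training and 100 validation "epochs", 64 features.
    X_train, y_train = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)
    X_val, y_val = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)

    best_auc, best_c = 0.0, None
    for _ in range(20):                              # random search budget
        c = float(10 ** rng.uniform(-3, 2))          # sampled hyperparameter
        model = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        if auc > best_auc:
            best_auc, best_c = auc, c
    # The chosen configuration is then evaluated once on a held-out test set
    # never used for training or for hyperparameter selection.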

 

Publications

1. Isachenko A.V., Zhao D.G., Melnichuk E.V., Dubynin I.A., Velichkovsky B.M., Shishkin S.L. The pursuing gaze beats mouse in non-pop-out target selection. Proc. of the 2018 IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC2018), - (year - 2018)

2. Kozyrskiy B.L., Ovchinnikova A.O., Moskalenko A.D., Velichkovsky B.M., Shishkin S.L. Classification of the gaze fixations in the eye-brain-computer interface paradigm with a compact convolutional neural network. Postproceedings of the 9th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2018, Volume 145, Pages 293-299 (year - 2018) https://doi.org/10.1016/j.procs.2018.11.062

3. Kozyrskiy B.L., Ovchinnikova A.O., Shishkin S.L. Estimating similarity between individual EEG datasets using a convolutional neural network. Proc. of the 2018 IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC2018), - (year - 2018)

4. Dubynin I.A., Yashin A.S., Shishkin S.L. Psychophysiological indicators of the sense of agency (in Russian). Proceedings of the VIII International Conference on Cognitive Science (18-21 October 2018, Svetlogorsk, Russia), pp. 352-354 (year - 2018)

5. Kozyrskiy B.L., Ovchinnikova A.O., Shishkin S.L. Classifying short EEG epochs with a compact convolutional neural network. Opera Medica et Physiologica, Vol. 4, Suppl. S1, p. 104 (year - 2018)

6. Moskalenko A.D., Kozyrsky B.L., Shishkin S.L. Space-time-frequency features and the convolutional-LSTM neural network for classifying EEG signals in an eye-brain-computer interface. Opera Medica et Physiologica, Vol. 4, Suppl. S1, p. 101 (year - 2018)

7. Shishkin S.L., Dubynin I.A., Velichkovsky B.M. Consciousness and volition as obstacles and as goals in human-machine interaction. Opera Medica et Physiologica, Vol. 4, Suppl. S1, pp. 108-109 (year - 2018)

8. Shishkin S.L., Dubynin I.A., Velichkovsky B.M. The importance of considering consciousness and volition in brain augmentation through brain-computer interfacing. Frontiers Spotlight Conference "Limitless! Augmentation of Brain Function" (19-21 Sept. 2018, SwissTech Convention Center, Lausanne, Switzerland), p. 43 (year - 2018)

9. Zhao D.G., Isachenko A.V., Melnichuk E.V., Kozyrskiy B.L., Shishkin S.L. EEG potentials related to moving object selection with gaze: a possible basis for more flexible eye-brain-computer interfaces. Opera Medica et Physiologica, Vol. 4, Suppl. S1, pp. 109-110 (year - 2018)


Annotation of the results obtained in 2019
In 2019, we obtained a number of fundamentally new results concerning the differences in the evoked brain magnetic response between spontaneous eye gaze fixations and fixations used for gaze interaction, i.e., for sending commands to a computer with gaze. The brain response synchronized with fixation onset was most pronounced 100-300 ms after the onset and was localized mainly in the left hemisphere of the cerebral cortex. The effect was maximal at (i) the frontal sensors located above the posterior sections of the frontal lobe, corresponding to the location of the frontal eye fields; (ii) the parieto-temporal sensors above the supramarginal gyrus, which belongs to the brain system of voluntary attention control; and (iii), to a lesser extent, the marginal occipito-temporal sensors above the occipito-temporal gyrus, where the zones of the ventral visual stream are localized. Activation of the frontal eye fields and ventral parietal zones of the voluntary attention system was previously observed (in fMRI) under conditions requiring voluntary inhibition of gaze movement toward an external stimulus (Matsuda et al., 2004; Tu et al., 2006; Ettinger et al., 2008). We have shown, for the first time, that the same cortical zones are activated when the gaze is intentionally held for control in the absence of any external signal. Moreover, the characteristic "neural fingerprint" of a control fixation appears in the evoked magnetic response 150-250 ms after its start, which allows us to count on its use for tuning and training fixation classifiers in eye-brain-computer interfaces (EBCIs). Importantly, the above effects were associated neither with corrective fixational eye movements nor with the planning of a subsequent saccade. More broadly, these results for the first time provide a key to the brain mechanisms involved in intentional control of the gaze in a situation where this control is fully voluntary and is not a response to visual stimuli or a modulation of such responses. The effectiveness of statistical analysis of the MEG data was significantly enhanced by developing a new method based on linear mixed models with a moving reference (baseline) interval. We also found that the initial idea of the EBCI can be modified in certain ways, as indicated by the high classification accuracy obtained on the MEG data when the analysis interval was extended into the region after the feedback given in response to a gaze fixation (group average ROC AUC 0.91), and by averaging the output of the EEG classifier over consecutive pursuits of intentionally or spontaneously tracked objects (ROC AUC 0.83 when averaging over the tracking of ten objects). These modifications are of limited use because they determine the type of gaze behavior only with a delay. However, they may be of interest for neuroadaptive technologies, where they could likely be used to assess the level of feedback expectation. At the 2019 stage, the project also examined a wide range of possibilities for improving classification accuracy in the EBCI by tuning, including automatically, the hyperparameters of artificial neural networks classifying EEG and MEG signals. The choice of methods made at the previous stage was confirmed and refined. A number of additional experimental studies were also carried out, which clarified other issues related to the development of the EBCI technology.
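
The output-averaging modification admits a very short illustration: assuming a hypothetical array of per-object classifier scores, averaging over consecutive pursued objects trades decision latency for reliability.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def averaged_auc(scores, labels, n_avg=10):
        """scores: (n_sequences, n_objects) per-object classifier outputs;
        labels: 1 = intentional pursuit sequence, 0 = spontaneous.
        Returns ROC AUC after averaging over the first n_avg objects."""
        return roc_auc_score(labels, np.asarray(scores)[:, :n_avg].mean(axis=1))

    # The decision becomes available only after n_avg objects are tracked,
    # but averaging cancels much of the per-object noise in the outputs.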

 

Publications

1. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Isachenko A.V., Melnichuk E.V., Velichkovsky B.M., Shishkin S.L. EEG during the selection of moving objects with smooth pursuit eye movements (in Russian). Cognitive Science in Moscow: New Research. Conference proceedings, 19 June 2019. Eds. E.V. Pechenkova, M.V. Falikman. Moscow: Buki Vedi, IPPiP, pp. 648-653 (year - 2019)

2. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Isachenko A.V., Melnichuk E.V., Velichkovsky B.M., Shishkin S.L. An expectation-based EEG marker for the selection of moving objects with gaze. Proc. 8th Graz Brain-Computer Interface Conference 2019. Verlag der Technischen Universität Graz, - (year - 2019) https://doi.org/10.3217/978-3-85125-682-6-53

3. Velikhov E.P., Kotov A.A., Lectorsky V.A., Velichkovsky B.M. Interdisciplinary studies of consciousness: 30 years later (in Russian). Voprosy Filosofii, No. 12, pp. 5-17 (year - 2018) https://doi.org/10.31857/S004287440002578-0

4. Shishkin S.L., Vasilyev A.N., Nuzhdin Y.O., Dubynin I.A., Svirin E.P., Butorina A.V., Zhao D.G., Kozyrskiy B.L., Malakhov D.G., Ushakov V.L., Ossadtchi A.E., Stroganova T.A., Velichkovsky B.M. Contrasting gaze-based interaction vs. spontaneous gaze behavior: EEG, MEG and fMRI studies. Journal of Eye Movement Research, Vol. 12, No. 7, P. 155 (year - 2019) https://doi.org/10.16910/jemr.12.7.1

5. Zhao D.G., Vasilyev A.N., Kozyrskiy B.L., Melnichuk E.V., Isachenko A.V., Velichkovsky B.M., Shishkin S.L. An EEG marker of the intentional smooth pursuit in human-machine interaction. Journal of Eye Movement Research, Vol. 12, No. 7, P. 395 (year - 2019) https://doi.org/10.16910/jemr.12.7.1

6. - Just think: a computer will learn to read human thoughts (in Russian). Izvestia, 6 February 2019, 00:01 (year - 2019)