INFORMATION ABOUT PROJECT,
SUPPORTED BY RUSSIAN SCIENCE FOUNDATION

The information is based on data from the RSF information-analytical system; the content is presented as edited by the authors. All rights belong to the authors; use or reprinting of these materials is permitted only with the authors' prior consent.

 

COMMON PART


Project Number: 22-19-00528

Project title: A new generation of the Eye-Brain-Computer Interface: basic research and technical solutions

Project Lead: Shishkin Sergei

Affiliation: Moscow State University of Psychology and Education

Implementation period: 2022 - 2024

Research area: 09 - ENGINEERING SCIENCES, 05-106 - Neurobiology

Keywords: brain-computer interfaces, BCI, gaze-based control, gaze-based interaction, gaze interaction, Midas touch problem, eye tracking, human-machine interfaces, human-machine interaction, electroencephalography, magnetoencephalography, EEG, MEG, event-related potentials, fixations


 

PROJECT CONTENT


Annotation
The project focuses on creating a highly efficient computer control system based on eye movements ("gaze control") and brain signals, with the potential to become an assistive technology for paralyzed people and to extend computer interaction for healthy people. Currently, non-invasive neural interfaces (brain-computer interfaces, BCIs) fall significantly behind other means of interaction with computers in terms of accuracy, speed, and ergonomics. The most serious and currently insurmountable obstacle preventing the use of gaze control is the so-called Midas touch problem, i.e., the inability to differentiate intentional from spontaneous (uncontrollable) gaze dwells. In our previous RSF-supported project, we tried to solve this problem by detecting intentional gaze dwells through the presence of an electroencephalographic (EEG) marker of feedback expectation. The marker was used by a special "passive" BCI that did not require any additional action from the user compared to standard gaze control. However, this solution proved to be insufficiently error-resistant: even a small number of false positives could cause the user to expect interface triggering in response to spontaneous gaze dwells as well. In this project, while retaining the idea of using a passive BCI, we propose to use, instead of non-specific features, only such "markers" as are directly related to the intentionality of gaze dwells. For this purpose, a number of studies will be performed aimed at clarifying the specific features of intentional gaze dwells in EEG and magnetoencephalography (MEG) data and in gaze micro-behavior, as well as at developing computational algorithms tuned specifically to such features. This approach to creating a hybrid eye-brain-computer interface (EBCI), based on specific control features, will be implemented for the first time.
Research on gaze micro-behavior and on the brain processes enabling intentional gaze dwells, compared with other situations in which eye fixations become involuntarily prolonged (for example, when looking at small objects or waiting for some event to occur), will also be fundamentally new. This research may help not only to create an effective EBCI technology but also to develop a whole field of research on the mechanisms of voluntary action. Effective separation of specific and nonspecific brain activity will be facilitated by high-density EEG and compact optically pumped magnetometers (OPMs), applied in EBCI studies for the first time; OPMs are a new type of magnetic sensor capable of providing record-breaking spatial resolution because they can be placed directly on the head.

Expected results
The main result of the project will be the creation of a new, practically useful, and effective eye-brain-computer interface (EBCI) technology, in which gaze control is significantly improved by a new non-invasive brain-computer interface (BCI). Developing a new generation of EBCI will require completing, for the first time, a set of tasks that significantly exceed the boundaries of this technology alone: (1) The study of gaze micro-behavior and brain functioning related to intentional gaze commands; these questions, surprisingly, have been previously overlooked by researchers. We expect to obtain important data on the features and specific markers of intentional human behavior in human-machine interaction and of intentional actions in general, important both for human-machine interface engineering and for other fields of knowledge. (2) Development of a set of technical solutions that improve the efficiency of gaze control by taking gaze micro-behavior into account. These could significantly improve gaze control even without supplementing it with brain signals; such technology could be in high demand among both disabled and healthy people due to its low cost, compactness, and simplicity of use. (3) Development of a set of technical solutions ensuring the operation of a new kind of BCI (one of the components of the new EBCI) that is specifically sensitive to intentional control by gaze. Currently, only a very limited number of effective noninvasive BCIs are known, and expanding their range can open up new opportunities for the development of BCI technology. (4) Development of a set of improvements in gaze control and BCI technologies, such as context-aware decisions, data augmentation, utilization of a priori information, meta-learning for classifiers, and the use of compact optically pumped magnetometers (OPMs), which provide unique spatial resolution and signal-to-noise ratio.
(5) Testing of the new interfaces by human participants will be aimed not only at the technical evaluation of their characteristics but also at identifying new possibilities, in particular possible improvements in the efficiency of solving certain intellectual tasks. It is expected that, based on the results of the project, it will be possible to create a highly effective assistive technology for paralyzed people and to open fundamentally new possibilities of computer interaction for healthy people.


 

REPORTS


Annotation of the results obtained in 2022
In 2022, the main work on the project focused on clarifying the fundamental facts that will serve as the basis for developing the eye-brain-computer interface technology at later stages of the project, as well as on creating and debugging the methodological tools for the work to be performed in 2023 and 2024. 1. The method for detecting components of the MEG signal, both phase-locked and non-phase-locked to gaze fixations, that differ in amplitude between intentional (used for control) and spontaneous gaze dwells has been refined. The technique makes it possible to effectively separate such components from artifacts associated with eye movements. A number of such components were described in detail, including a rhythmic component in the alpha-beta range with sources in the frontal oculomotor fields. An article describing the methodology and the results has been prepared. 2. With the emergence of an efficient implementation of a well-interpretable neural network, SimpleNet (Petrosyan et al., 2022), which allows selective tuning to MEG and EEG signal sources with different topographies and frequency spectra, it was used instead of the previously planned beamformers and GED. The network was adapted to these problems in relation to our data. Preliminary results show that single gaze dwells associated and not associated with "eye control" can be separated by MEG after excluding the contribution of sources that are not specific to the "eye control" task. The average accuracy was 0.78 before removal of the sources not specifically related to control and changed only slightly after removal, decreasing to 0.74 (by 5%). 3. The meta-learning algorithms MAML and Reptile were adapted to work with BCI/EBCI data. Supplementing a BCI/EBCI classifier with these meta-learning algorithms was shown to reduce the amount of training data required from a new user. 4.
To improve the efficiency of gaze-based computer control, gaze micro-behavior was studied in various scenarios, such as viewing objects on the screen and focusing attention on an object with or without the intention to issue a command. Differences in gaze micro-behavior between these situations could make it possible to create a classifier that recognizes the intention to issue a command more accurately than existing algorithms, since it would rely on physiological patterns rather than on prolonged dwell in a given area, as current algorithms do. A special experimental technique was developed to provoke long fixations with similar micro-behavior characteristics (a control condition for gaze-based control). A number of informative features were identified, on the basis of which several classifiers were trained (linear discriminant analysis, support vector machines with linear and radial basis function kernels). These classifiers demonstrated similar accuracies of approximately 0.8. When the classifier was trained and applied within the same subject, accuracy increased to 0.85 with an SD below 0.05. 5. To enable classification of gaze dwells in near real time, the algorithms and programs of the basic experimental environment used in our work to simulate various types of gaze-based human-computer interaction were significantly modernized and debugged in pilot experiments. 6. In a separate experimental study, a slow positive EEG wave with a maximum in the parieto-occipital leads, following the fixation-related potential (lambda wave), was found during gaze control and during close inspection of small graphic elements, but not during spontaneous gaze dwells. 7. A method for using compact optically pumped magnetometers (OPMs), including co-registration with eye-tracker data and use during gaze control, has been developed.
Techniques have been developed for fixing the OPM sensors in a helmet so that they remain immobile in space, ensuring accurate measurements under conditions with a residual field gradient.
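The benefit of adapting Reptile-style meta-learning (item 3 above) can be illustrated with a toy sketch. This is not the project's implementation: here each "user" is a one-parameter linear regression task standing in for a BCI user's data distribution, and a shared initialization is meta-learned so that a few gradient steps suffice to adapt to a new user.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_data(a, n=20):
    # each "user" is a toy task: y = a * x + noise
    x = rng.uniform(-1, 1, n)
    return x, a * x + 0.01 * rng.standard_normal(n)

def sgd_steps(w, x, y, lr=0.1, k=10):
    # inner-loop adaptation: a few gradient steps on the user's data
    for _ in range(k):
        w -= lr * 2 * np.mean((w * x - y) * x)  # gradient of MSE w.r.t. w
    return w

# Reptile outer loop: nudge the shared initialization toward
# each task-adapted weight instead of averaging raw gradients
w_init = 0.0
for _ in range(200):
    a = rng.uniform(1.0, 3.0)          # sample a "user" (task parameter)
    x, y = task_data(a)
    w_task = sgd_steps(w_init, x, y)
    w_init += 0.1 * (w_task - w_init)  # Reptile update of the initialization
```

After meta-training, `w_init` sits near the center of the task distribution, so a new user needs far fewer adaptation steps (or less data) than a classifier started from scratch, which is the effect reported above.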

 

Publications

1. Vasilyev A.N., Svirin E.P., Dubynin I.A., Stroganova T.A., Shishkin S.L. Cortical alpha/beta oscillations in voluntary prolonged eye fixations. The 22nd International Conference on Biomagnetism, p. 513 (year - 2022)


Annotation of the results obtained in 2023
At the beginning of 2023, work continued on improving the performance of brain-computer interface (BCI) classifiers, since existing non-invasive BCIs do not provide the accuracy and speed at which a BCI becomes useful for enhancing gaze-based interaction within the eye-brain-computer interface (EBCI), i.e., for achieving the main goal of our project: (1) In an in-depth study of the use of meta-optimization in transfer learning for BCI classifiers (between participants/users), improved classification (compared to transfer without meta-optimization) was shown for the first time for zero-shot learning, i.e., without additional training of the classifier on a new user's data, which is critical for many BCI applications, especially those aimed at patients with cognitive impairment (Berdyshev et al., 2023). (2) For the first time, Bayesian neural networks were created based on EEGNet and ShallowConvNet, architectures known to perform well in EEG classification, and on their basis a method was developed for detecting out-of-domain data (i.e., data significantly different from those present in the training set) for BCI classifiers (Chetkin et al., 2023). (3) An approach to classification in the EBCI based on joint decorrelation of EEG and MEG signals was developed. Several lines of work were aimed at enabling the effective use of a new type of sensor for signals of brain origin in human-machine interfaces: (4) An original method for creating individual helmets for optically pumped magnetometers (OPMs) based on individual 3D head scans was developed. Data were obtained in favor of the higher efficiency of OPM MEG, in comparison with EEG and cryogenic MEG, for detecting beta-rhythm synchronization during quasi-movements, a motor phenomenon that we plan (see below) to use as a replacement for the motor imagery traditionally used in EBCI.
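The out-of-domain detection idea in item (2) can be sketched in miniature. In this illustration a bootstrap ensemble of logistic regressions stands in for a Bayesian neural network, and the spread of the ensemble's decision scores serves as the uncertainty that flags inputs unlike the training data; all data and dimensions are synthetic, and the project's actual method differs in architecture and uncertainty measure.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logreg(X, y, lr=0.5, steps=300):
    # plain gradient-descent logistic regression (no bias term)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "in-domain" training data: two clusters of 2-D toy features
n = 200
X = np.vstack([rng.normal(-1, 0.5, (n, 2)), rng.normal(1, 0.5, (n, 2))])
y = np.r_[np.zeros(n), np.ones(n)]

# bootstrap ensemble as a crude stand-in for a weight posterior
ensemble = [fit_logreg(X[idx], y[idx])
            for idx in (rng.integers(0, len(X), len(X)) for _ in range(20))]

def uncertainty(x):
    # disagreement of the ensemble's decision scores on input x
    return np.std([w @ x for w in ensemble])

u_in = np.mean([uncertainty(x) for x in rng.normal(1, 0.5, (50, 2))])
u_out = np.mean([uncertainty(x) for x in rng.normal(8, 0.5, (50, 2))])
```

Points drawn far from the training clusters (`u_out`) produce larger score disagreement than in-domain points (`u_in`), so thresholding this quantity lets a classifier abstain on data it was never trained on.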
The study, begun in 2022, of the specific features of gaze behavior when gaze is deliberately used to select an object has been completed; this behavior is the basis of gaze-based interaction technology, which in turn is the main component of the human-machine interface we are developing: (5) Previously obtained results on the specific characteristics of saccades and gaze fixations during gaze-based interaction, in comparison with unintentional gaze dwells, were refined (Shevtsova et al., 2023). A series of experiments (20 participants) was carried out using the technique developed in 2022 for comparing intentional gaze dwells during gaze-based interaction with other types of gaze dwells, including those associated with closely examining a small object. Using the data from these experiments, we described the features of gaze micro-behavior during dwells in a local area in various scenarios. A theoretical study of the requirements for the characteristics of highly sensitive human-machine interfaces was carried out: (6) Based on an analysis of the literature, the "uncanny valley" hypothesis for human-machine interfaces was proposed (Yashin, 2023). According to this hypothesis, if an interface's response speed is particularly high, even a small number of errors will lead to the interface's response being perceived as foreign and its performance as unsuccessful. The main effort in the 2023 project phase was aimed at creating, taking into account the results obtained in 2022 and 2023, prototypes of human-machine interfaces that are highly sensitive to user intent, to enable experimentation with them in 2024: (7) An approach, methodology, and software were developed for a BCI in which the gaze indicates the target, similar to a computer mouse cursor, while the "click" is carried out by an attempted movement detected by the BCI component of the system.
This approach is based on the hypothesis (Shishkin et al., 2023), put forward on the basis of a literature analysis, that attempted movements are more compatible with gaze-based control than motor imagery: when attempting a movement, there is no need to focus attention on such a distinctly "internal" task as imagining the movement (attention being one of the main drivers of involuntary eye movements). In the version of the technique for experiments with healthy participants, movement attempts will be represented by quasi-movements, i.e., movement attempts minimized to such an extent that both the movements and the electromyographic (EMG) response disappear, while the time-locked brain activation remains. The topographic pattern of this activation is similar to that observed during real movements and during motor imagery. Instead of quasi-movements, real movements, including small-amplitude ones, can also be used (this is important both for assessing the possibility of using the technology in patients with incomplete paralysis and for monitoring the effects of suppressing real movement, which may be inherent to performing quasi-movements). (8) A model of the gaze-control interface was created in which intentional gaze dwells used to select an object on the screen can be recognized not only in the traditional way (using a dwell-time threshold) but also by taking into account a number of characteristics of gaze micro-behavior. This recognition is performed by a statistical classifier (a support vector machine) with an additional delay of about 30 ms, imperceptible to the user, i.e., almost in real time.
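As a minimal illustration of recognizing intentional dwells from micro-behavior (item 8), the sketch below trains a Fisher linear discriminant on two synthetic features. The feature names and numbers are invented for illustration only, and the discriminant is a simple stand-in for the support vector machine actually used in the interface model.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical micro-behavior features per dwell:
# [drift amplitude, microsaccade rate]; class 0 = spontaneous, 1 = intentional
n = 200
X0 = rng.normal([0.30, 1.2], 0.15, (n, 2))   # spontaneous dwells
X1 = rng.normal([0.18, 0.8], 0.15, (n, 2))   # intentional (command) dwells
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Fisher linear discriminant: w = Sw^-1 (m1 - m0)
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) + np.cov(X1.T)             # pooled within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
b = -w @ ((m0 + m1) / 2)                     # threshold midway between means

pred = (X @ w + b > 0).astype(float)
acc = (pred == y).mean()
```

Evaluating such a linear rule costs only a dot product per dwell, which is consistent with classification delays of a few tens of milliseconds on top of the dwell itself.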
A method for taking the context of activity into account when determining the intentionality of a gaze dwell has been developed, implemented, and fine-tuned, along with a method for combining its assessment with the output of a classifier that takes gaze micro-behavior into account. An experimental technique was developed that reproduces the main features of real gaze-based interaction, in contrast to a previously published study (Isomoto et al., 2022) in which gaze-based interaction enhanced by machine learning was modeled incorrectly, using artificially indicated targets, which inevitably should have changed gaze micro-behavior compared to normal gaze-based interaction. A two-day experiment was conducted with 16 naive participants who performed gaze-interaction tasks using only a dwell-time threshold of 500 ms (the proportion of correctly selected objects at the preliminary selection stage was 56.9 ± 10.3% on the first day and 65.0 ± 9.9% on the second day; the difference may be due to a modification of gaze behavior), as well as using a classifier. On the first day, a classifier trained on previously obtained group data was used; on the second day, an individual classifier trained on data from the first day (the proportion of correctly selected objects at the preliminary selection stage on the first and second days was 87.1 ± 4.1% and 83.0 ± 7.3%, respectively). Given that errors at the preliminary object-selection stage in gaze control demand significant attentional resources and contribute to fatigue, the results can be considered encouraging with regard to the prospects of enhancing gaze-based interaction using machine learning methods.
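Combining a context-based estimate with the micro-behavior classifier's output can be sketched as a simple log-odds fusion. This is a naive-Bayes style rule shown purely for illustration; the project's actual combination method may differ.

```python
import math

def fuse(p_classifier, p_context):
    # add the log-odds of the two probability estimates and map back to a
    # probability; equivalent to naive-Bayes fusion under a uniform prior
    logit = lambda p: math.log(p / (1 - p))
    z = logit(p_classifier) + logit(p_context)
    return 1 / (1 + math.exp(-z))

# a dwell the micro-behavior classifier finds ambiguous (0.55) becomes a
# confident selection when the activity context favors a command (0.9)
p_fused = fuse(0.55, 0.9)
```

With a neutral context (`p_context = 0.5`) the rule leaves the classifier's estimate unchanged, which makes such a fusion easy to introduce incrementally into an existing dwell-based interface.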
In our project, the creation of such real-time technology provides the basis for modeling highly sensitive human-machine interaction at the final stage of the project.

 

Publications

1. Chetkin E.I., Shishkin S.L., Kozyrskiy B.L. Bayesian opportunities for brain–computer interfaces: enhancement of the existing classification algorithms and out-of-domain detection. Algorithms, Vol. 16, Issue 9, Article ID 429 (year - 2023) https://doi.org/10.3390/a16090429

2. D.A. Berdyshev, A.M. Grachev, S.L. Shishkin, B.L. Kozyrskiy Meta-Optimization of Initial Weights for More Effective Few- and Zero-Shot Learning in BCI Classification. 2023 IEEE Ural-Siberian Conference on Computational Technologies in Cognitive Science, Genomics and Biomedicine (CSGB), pp. 263-267 (year - 2023) https://doi.org/10.1109/CSGB60362.2023.10329624

3. Shevtsova Y.G., Vasilyev A.N., Shishkin S.L. Machine learning for gaze-based selection: performance assessment without explicit labeling. International Conference on Human-Computer Interaction, LNCS, Vol. 14054, pp. 311–322 (year - 2023) https://doi.org/10.1007/978-3-031-48038-6_19

4. Yashin A.S. A Challenge for Bringing a BCI Closer to Motor Control: The “Interface Uncanny Valley” Hypothesis. 2023 IEEE Ural-Siberian Conference on Computational Technologies in Cognitive Science, Genomics and Biomedicine (CSGB), pp. 242-247 (year - 2023) https://doi.org/10.1109/CSGB60362.2023.10329830

5. D.A. Berdyshev, A.M. Grachev, S.L. Shishkin, B.L. Kozyrskiy Using General-Purpose Meta-Learning Algorithms to Train a BCI Classifier on Less Data. Proceedings of the 10th International BCI Meeting (June 6 – 9, 2023, Sonian Forest, Brussels, Belgium), Article ID 144557 (year - 2023)

6. S.L. Shishkin, A.S. Yashin, D.A. Berdyshev, A.N. Vasilyev Quasi-movements as a possible alternative to motor imagery in neurointerface research. Abstracts of the XXIV Congress of the I.P. Pavlov Physiological Society, pp. 305-306 (year - 2023)

7. S.L. Shishkin, D.A. Berdyshev, A.S. Yashin, A.Y. Zabolotniy, A.E. Ossadtchi, A.N. Vasilyev Quasi-Movements as a Model of Attempted Movements: An Alternative to Motor Imagery in Brain-Computer Interfaces. Proceedings of the 10th International BCI Meeting (June 6 – 9, 2023, Sonian Forest, Brussels, Belgium), Article ID 144486 (year - 2023)