Patients with neuromuscular diseases who are unable to speak but whose cognitive abilities remain intact can benefit from Brain-Computer Interfaces (BCIs). Decoding inner (covert) speech from EEG signals is one of the state-of-the-art approaches that aim to tackle this issue. High inter-subject variability and a low signal-to-noise ratio (SNR) undermine existing methods and motivate computer-assisted solutions. Thus, machine learning models and large amounts of recorded data are required to design effective algorithms and produce substantial results. In this study, covert vowel classification was performed systematically using two openly shared databases from the literature: the Coretto database, which contains EEG recordings of native Spanish speakers, and the DAIS dataset, which includes EEG recordings of native Dutch speakers. Six classifiers were initially selected to perform 5-class classification: a Random Forest (RF), a k-Nearest Neighbours (kNN), a Gaussian Naive Bayes (GNB), a Deep Convolutional Neural Network (DCNN), a Shallow Convolutional Neural Network (SCNN) and a Long Short-Term Memory recurrent neural network (LSTM). The DCNN outperformed the other methods, with average intra-subject accuracies of 35% for Coretto and 39% for DAIS (chance level 20%). Afterwards, an overt-versus-covert trials experiment was conducted to test the limits of overt speech decoding from EEG. Overt accuracy was slightly higher than covert, with intra-subject averages of 37.8% for Coretto and 40.5% for DAIS (chance level 20%). Finally, binary classification was performed to identify the pairs of vowels that can be classified most reliably. The pair /a/ versus /u/ performed best on average in both datasets (64.8% for Coretto and 64.4% for DAIS, with a chance accuracy of 50%). Future work should focus on identifying the informative parts of the EEG recordings, increasing the SNR and electrode resolution, and defining the most appropriate dictionaries of words/vowels for a BCI. Moreover, more studies should adopt systematic cross-dataset comparisons to obtain less ambiguous insights and advance the field.
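To illustrate the intra-subject 5-class setup described above, the following is a minimal sketch of a per-subject evaluation using only the classical baselines named in the abstract (RF, kNN, GNB). The data layout (trials x channels x samples), the cross-validation scheme, and all hyperparameters are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of an intra-subject 5-class vowel classification baseline.
# X: preprocessed EEG epochs, shape (n_trials, n_channels, n_samples); y: vowel labels.
# Classifier settings below are illustrative assumptions, not the study's actual configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score


def evaluate_subject(X, y, n_folds=5):
    """Return cross-validated accuracy per classical classifier for one subject."""
    X_flat = X.reshape(len(X), -1)  # flatten channels x samples into a feature vector
    classifiers = {
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "GNB": GaussianNB(),
    }
    results = {}
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X_flat, y, cv=n_folds)
        results[name] = scores.mean()
    return results  # compare against the 20% chance level for five classes


if __name__ == "__main__":
    # Random data standing in for real EEG epochs, purely to show the interface.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 6, 512))  # 100 trials, 6 channels, 512 samples
    y = rng.integers(0, 5, size=100)        # five vowel classes: /a/ /e/ /i/ /o/ /u/
    print(evaluate_subject(X, y))
```

In a setup like this, the binary /a/ versus /u/ experiment would simply restrict X and y to trials of those two classes, with the chance level rising from 20% to 50%; the deep models (DCNN, SCNN, LSTM) would replace the flattened-feature classifiers with networks operating on the channel-by-time epochs directly.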