The broad learning system (BLS) paradigm has recently emerged as a computationally efficient approach to supervised learning. Its efficiency arises from a learning mechanism based on the method of least-squares. However, the need to store and invert large matrices can put the efficiency of such a mechanism at risk in big-data scenarios. In this work, we propose a new implementation of BLS in which the need for storing and inverting large matrices is avoided. The distinguishing features of the designed learning mechanism are as follows: 1) the training process can balance between efficient usage of memory and required iterations (hybrid recursive learning) and 2) retraining is avoided when the network is expanded (incremental learning). It is shown that, while the proposed framework is equivalent to the standard BLS in terms of trained network weights, much larger networks than the standard BLS can be smoothly trained by the proposed solution, projecting BLS toward the big-data frontier.
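To make the memory/iteration trade-off concrete, the following is a minimal sketch of chunked (recursive) ridge least-squares, the general idea underlying such schemes: the normal-equation statistics are accumulated over data chunks so the full feature matrix never has to be stored or inverted at once. The function name, the `chunks` iterable, and the regularization parameter `lam` are illustrative assumptions, not the paper's actual algorithm, whose hybrid recursive and incremental updates differ in detail.

```python
import numpy as np

def chunked_ridge_solve(chunks, n_features, lam=1e-3):
    """Illustrative chunked least-squares (not the paper's exact method).

    Accumulates A^T A and A^T Y over streamed chunks, so memory usage is
    governed by the chunk size rather than the full training-set size.
    """
    AtA = lam * np.eye(n_features)   # ridge-regularized Gram matrix
    AtY = None
    for A_chunk, Y_chunk in chunks:  # stream (features, targets) pairs
        AtA += A_chunk.T @ A_chunk
        contrib = A_chunk.T @ Y_chunk
        AtY = contrib if AtY is None else AtY + contrib
    # One small (n_features x n_features) solve replaces inverting a
    # matrix whose size grows with the number of training samples.
    return np.linalg.solve(AtA, AtY)
```

Smaller chunks reduce peak memory at the cost of more accumulation iterations, which is the balance the hybrid recursive learning described above is designed to control.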