Algorithms for sparse representations
The objective is to identify practical algorithms for computing sparse approximations, and to develop, analyze and test prototype decomposition algorithms tailored for use within this project.
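As an illustration of the kind of decomposition algorithm in question, the following is a minimal sketch of orthogonal matching pursuit (OMP), a standard greedy method for sparse approximation; it is an illustrative example, not the project's specific prototype, and all names in it are our own.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse approximation: select k atoms of dictionary D for signal y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the selected support by least squares.
        coef = np.linalg.lstsq(D[:, support], y, rcond=None)[0]
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

# Usage: a random overcomplete dictionary and an exactly 2-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(32)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, 2)
```

For incoherent random dictionaries and sufficiently sparse signals, this greedy selection provably recovers the true support; the least-squares re-fit at each step is what distinguishes OMP from plain matching pursuit.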
The goal is to develop the mathematical tools required to analyze the theoretical issues surrounding sparse representation models and dictionary learning, and to understand how the various dictionaries associated with (structured) sparse models can be learned in different settings.
Algorithms for dictionary learning
We will develop new tools for learning. Our main goal is to boost the use of dictionary learning, make it accessible and reliable, and demonstrate its usability in various domains. The concept of dictionary learning has been demonstrated in various applications and has provided encouraging results. However, current approaches are limited by the computational complexity of general high-dimensional matrix-vector multiplication, which is O(n^2). To be competitive with the complexity of pre-defined fast transforms, it is imperative that we develop new strategies for learning dictionaries capable of fast deployment - typically O(n log n) complexity for matrix-vector products with both the dictionary and its adjoint.
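The complexity gap can be illustrated with a simple structured dictionary. In this sketch (ours, for illustration only) the dictionary is circulant, i.e. every atom is a cyclic shift of one generator; such a dictionary is diagonalized by the Fourier transform, so applying it via the FFT costs O(n log n) instead of the O(n^2) of an explicit matrix-vector product, and the adjoint is applied the same way with conjugated Fourier coefficients.

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)
h = rng.standard_normal(n)    # generator: a single learned "mother" atom

# Dense circulant dictionary: column s is the generator shifted by s samples.
D = np.stack([np.roll(h, s) for s in range(n)], axis=1)

x = rng.standard_normal(n)

# O(n^2): explicit matrix-vector product with the dense dictionary.
y_dense = D @ x

# O(n log n): the same product as a circular convolution computed by FFT.
y_fast = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

# The adjoint D.T @ y would use the conjugate spectrum: ifft(conj(fft(h)) * fft(y)).
```

The two products agree to numerical precision; the point is that learning the n-dimensional generator h fixes the whole n x n dictionary while keeping both D and its adjoint fast to apply.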
Evaluation & Demonstration Framework (SMALLbox)
To provide a comprehensive means of testing the algorithms developed in this project, we will build an Evaluation & Demonstration Framework. This framework is designed to fulfil three main goals.
SMALLbox has been developed to achieve these goals. It provides an evaluation framework that enables easy prototyping, testing and benchmarking of sparse representation and dictionary learning algorithms. This is achieved through a set of test problems and straightforward evaluation against state-of-the-art algorithms.
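SMALLbox itself is a MATLAB toolbox; purely to illustrate the problem/solver/evaluation pattern it embodies, here is a hypothetical Python sketch. Every name in it (make_problem, thresholding_solver, evaluate) is our own invention, not SMALLbox's real API.

```python
import numpy as np

def make_problem(n=64, m=128, k=4, seed=0):
    """A synthetic test problem: recover a k-sparse x from y = D @ x."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)
    x = np.zeros(m)
    idx = rng.choice(m, size=k, replace=False)
    x[idx] = rng.standard_normal(k)
    return D, x, D @ x

def thresholding_solver(D, y, k):
    """Baseline solver: keep the k largest correlations, then least-squares refit."""
    idx = np.argsort(-np.abs(D.T @ y))[:k]
    coef = np.linalg.lstsq(D[:, idx], y, rcond=None)[0]
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def evaluate(solver, n_trials=10):
    """Benchmark a solver on repeated random instances of the test problem."""
    errs = []
    for t in range(n_trials):
        D, x_true, y = make_problem(seed=t)
        x_hat = solver(D, y, 4)
        errs.append(np.linalg.norm(x_hat - x_true))
    return float(np.mean(errs))

score = evaluate(thresholding_solver)
```

Plugging a different solver function into the same evaluate loop is what makes benchmarking against state-of-the-art algorithms a one-line change.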
Incorporating Structure into Sparse Representations
A single linear over-complete dictionary cannot be expected to fully capture a signal's high-level coherent structure, and it often ignores a wealth of additional information that is readily available to us (e.g. wavelet coefficient magnitudes decay down the wavelet tree; images are generally non-negative). The goal is to develop algorithms that can learn dictionaries constrained by additional structure, either structure known to exist within the data considered (e.g. many time series are inherently shift-invariant) or structure learned simultaneously with the dictionary atoms. A specific case on which we will particularly focus is that of multi-stream and multi-modal data.
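One simple way such structure enters the sparse coding step is block (group) sparsity: coefficients are kept or discarded in groups rather than individually, encoding the prior that active coefficients cluster together (as they do down a wavelet tree). The following is a minimal sketch of block hard thresholding, one elementary instance of structured sparse coding; it is illustrative, not the project's specific algorithm.

```python
import numpy as np

def block_hard_threshold(x, block_size, n_blocks_keep):
    """Keep the n_blocks_keep contiguous blocks of coefficients with the
    largest energy and zero out all others: sparsity structured in groups."""
    blocks = x.reshape(-1, block_size)
    energy = np.linalg.norm(blocks, axis=1)         # per-block l2 energy
    keep = np.argsort(-energy)[:n_blocks_keep]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

x = np.array([0.1, 0.2, 3.0, 2.5, 0.05, 0.0, 0.3, 0.1])
# With blocks of size 2, the block [3.0, 2.5] has by far the largest energy,
# so only that block survives.
y = block_hard_threshold(x, 2, 1)
```

Replacing the per-coefficient selection of a plain sparse coder with a per-group rule like this is the basic mechanism behind structured variants of pursuit and thresholding algorithms.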
The goal is to disseminate the knowledge acquired in the research project, to raise interest in the project during its development, and to facilitate the exchange of ideas and solutions with the signal and image processing community.