# Workshop on Sparsity, Localization and Dictionary Learning

The European UNLocX and SMALL projects organized a public joint workshop on Sparsity, Localization and Dictionary Learning. The event was held at Queen Mary University of London on 26 & 27 June 2012.

**Slides and recordings of the talks are available here: materials of the workshop**

- Introduction
- Important Dates
- Programme
- List of delegates
- Invited speakers
- Hack session
- Posters

- Catering information
- Arrival in London
- Venue Information
- Getting around in London
- Hotels

- Contact Information

Sparse signal models centred around a learned sparsifying dictionary have proved to yield flexible and efficient approaches that can tackle some modern applications where scalability and efficiency are paramount. However, some complex applications, for instance in the life sciences or audio signal processing, still cannot be solved adequately with existing algorithms. An alternative is to find signal representations that allow the derivation of ultra-efficient algorithms, based on localisation measures that allow the construction of minimising waveforms.

In this workshop, we present existing work on sparsity, dictionary learning methods, and localisation. We consider how they address the need to develop algorithms that can meet modern challenges, and discuss future directions in this field.

Registration is now closed.

Workshop: 26 & 27 June 2012


Abstracts of the presentations can be found in the Invited Speakers section.

**Day 1: Tuesday 26 June 2012**

09:30-10:00 Tea & Coffee - Registration

10:00-10:15 **Mark Plumbley –** Welcome and Opening Remarks

10:15-11:00 **Mike Davies** (University of Edinburgh)

**Compressible distributions**

11:00-11:30 Tea & Coffee

11:30-12:15 **Michael Elad** (Technion)

**The Analysis Sparse Model - Definition, Pursuit, Dictionary Learning, and Beyond**

12:15-13:00 **Daniel Vainsencher** (Technion)

**Sample complexity of dictionary learning through ε-nets**

13:00-14:30 Lunch & Posters

14:30-15:15 **Rémi Gribonval** (INRIA, Rennes)

**Sparse dictionary learning in the presence of noise & outliers**

15:15-15:45 Tea & Coffee

15:45-16:30 **Gabriel Peyré** (CNRS and Université Paris-Dauphine)

**Robust Sparse Analysis Regularization**

16:30-17:00 Round up & Discussion

17:00 Retire to local pub, followed by dinner

**Day 2: Wednesday 27 June 2012**

09:30-10:00 Tea & Coffee

10:00-10:45 **Nir Sochen** (Tel Aviv University)

**The time-frequency uncertainty principle is exceptional!**

10:45-11:30 **Bruno Torresani** (Université de Provence)

**Lp and entropic uncertainty inequalities**

11:30-12:00 Tea & Coffee

12:00-12:45 **Pierre Vandergheynst** (EPFL)

**On localization and uncertainty in graph based representations**

12:45-14:15 Lunch & Posters

14:15-15:00 **Anders Hansen** (University of Cambridge)

**Sampling and subsampling in an analog/infinite-dimensional framework**

15:00-16:00 **Discussion Panel**

16:00-16:45 Round up & Discussion

16:45 FINISH

The list of participants attending the workshop is available here: Participants list

### Mike Davies (UEDIN)

Compressible distributions*

We consider the problem of compressed sensing when the signal is drawn from a statistical signal model, and identify probability distributions whose independent and identically distributed (iid) realizations are compressible/incompressible, i.e., can/cannot be well approximated as sparse. We focus mainly on the context of Gaussian random sensing matrices and show that the prior distributions associated with the maximum a posteriori (MAP) interpretation of Lp sparse regularization estimators are in fact incompressible at some undersampling ratio, in the limit of large problem sizes. To show this, we identify non-trivial undersampling regions in which the simple least squares solution almost surely outperforms an oracle sparse solution when the data is generated from such a distribution. We then consider quantifying the limits of compressibility in terms of sample-distortion functions when the distribution's second moment is finite. We are able to bound the minimum achievable compressed sensing distortion irrespective of the sensing matrix and reconstruction algorithm. We conclude by applying these bounds to a simple multi-resolution statistical image model.

*This is joint work with R. Gribonval, V. Cevher and C. Guo
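As a rough numerical illustration of the compressibility notion in this abstract (an editorial sketch; the distributions and the 1% cut-off below are illustrative choices, not from the talk): iid draws from the Laplace prior behind l1 regularization spread their energy over many entries, while draws from a sufficiently heavy-tailed distribution concentrate it in a few large ones.

```python
import numpy as np

def relative_best_k_error(x, k):
    """Relative l2 error of the best k-term (sparse) approximation of x."""
    mags = np.sort(np.abs(x))[::-1]          # magnitudes, largest first
    return np.sqrt(np.sum(mags[k:] ** 2) / np.sum(mags ** 2))

rng = np.random.default_rng(0)
n, k = 100_000, 1_000                        # keep the largest 1% of entries

laplace = rng.laplace(size=n)                # MAP prior of l1 regularization
heavy = rng.standard_t(df=1.5, size=n)       # heavy-tailed (infinite variance)

err_laplace = relative_best_k_error(laplace, k)
err_heavy = relative_best_k_error(heavy, k)
# Laplace draws keep most of their energy outside the top 1% (error near 1),
# while heavy-tailed draws are well approximated by their few largest entries.
```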

### Michael Elad (Technion)

The Analysis Sparse Model - Definition, Pursuit, Dictionary Learning, and Beyond*

The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this talk we concentrate on an alternative, analysis-based model, where an analysis operator -- hereafter referred to as the "Analysis Dictionary" -- multiplies the signal, leading to a sparse outcome.

While the two alternative models seem to be very close and similar, they are in fact very different. In this talk we define clearly the analysis model and describe how to generate signals from it. We discuss the pursuit denoising problem that seeks the zeros of the signal with respect to the analysis dictionary given noisy measurements. Finally, we explore ideas for learning the analysis dictionary from a set of signal examples. We demonstrate this model's effectiveness in several experiments, treating synthetic data and real images, showing a successful and meaningful recovery of the analysis dictionary.

*This is joint work with R. Rubinstein, T. Peleg, R. Gribonval, S. Nam (INRIA, Rennes), and M. Davies (UEDIN)
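A minimal numerical sketch of the synthesis/analysis distinction (editorial; a finite-difference operator stands in here for a learned analysis dictionary): a piecewise-constant signal is dense in the signal domain, yet multiplying it by the analysis operator leaves only a few nonzeros.

```python
import numpy as np

n = 64
# A piecewise-constant signal: every sample is nonzero...
x = np.concatenate([np.full(20, 3.0), np.full(24, -1.0), np.full(20, 2.0)])

# ...but applying a simple analysis operator -- finite differences,
# a toy stand-in for a learned analysis dictionary -- yields a sparse outcome.
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # (n-1) x n difference rows
z = Omega @ x

nonzeros_signal = np.count_nonzero(x)    # dense in the signal domain
nonzeros_analysis = np.count_nonzero(z)  # only the two jumps survive
```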

### Daniel Vainsencher (Technion)

Sample complexity of dictionary learning through ε-nets*

How many independent examples of signals are sufficient to learn a dictionary of p elements over signals of dimension n? How are these related to requirements on representation sparsity (in either the l_0 or l_1 sense)? Learning a dictionary from too small a sample may lead to overfitting, and at least some algorithms are quite sensitive to sample size.

We show that O(np*log(lambda)) examples suffice when l_1 norm of representations is bounded by lambda. In the case of representations with l_0 norm bounded by k, the geometry of the dictionary (through its Babel function) plays an important role. For classes of suitably incoherent dictionaries, we show O(np*log(k)) examples suffice. As a corollary, a simple (but exponential time) algorithm provably finds a dictionary that is globally near optimal for the distribution (not just the sample).

*This is joint work with Shie Mannor and Alfred M. Bruckstein

### Rémi Gribonval (INRIA Rennes)

Sparse dictionary learning in the presence of noise & outliers*

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a *learned* dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. Considering a probabilistic model of sparse signals, we show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals.

Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries.

The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.

*This is joint work with R. Jenatton & F. Bach

### Gabriel Peyré ( CNRS and Université Paris-Dauphine)

Robust Sparse Analysis Regularization*

In this talk I will detail several key properties of L1-analysis regularization for the resolution of linear inverse problems [5]. With the notable exception of [1,3], most previous theoretical works consider sparse synthesis priors where the sparsity is measured as the norm of the coefficients that synthesize the signal in a given dictionary, see for instance [3,4]. In contrast, the more general analysis regularization minimizes the L1 norm of the correlations between the signal and the atoms in the dictionary. The corresponding variational problem includes several well-known regularizations such as the discrete total variation, the fused lasso and sparse correlation with translation invariant wavelets. I will first study the variations of the solution with respect to the observations and the regularization parameter, which enables the computation of the degrees of freedom estimator. I will then give a sufficient condition to ensure that a signal is the unique solution of the analysis regularization when there is no noise in the observations. The same criterion ensures the robustness of the sparse analysis solution to a small noise in the observations. Lastly I will define a stronger condition that ensures robustness to an arbitrary bounded noise. In the special case of synthesis regularization, our contributions recover already known results [2,4], which are hence generalized to the analysis setting. I will illustrate these theoretical results on practical examples to study the robustness of the total variation, fused lasso and translation invariant wavelets regularizations.

*This is joint work with S. Vaiter, C. Dossal, J. Fadili
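As a toy instance of the L1-analysis regularizations mentioned above, the 1D discrete total variation denoising problem can be solved by projected gradient on its dual (a Chambolle-style scheme). This is an editorial sketch under simple assumptions, not the methods of the talk:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=500):
    """min_x 0.5*||x - y||^2 + lam*||D x||_1  (1D total variation),
    an instance of L1-analysis regularization with D = finite differences.
    Solved by projected gradient on the dual variable p, with |p_i| <= lam."""
    n = len(y)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    p = np.zeros(n - 1)
    tau = 0.25                                # step size <= 1 / ||D D^T||
    for _ in range(n_iter):
        p = np.clip(p + tau * (D @ (y - D.T @ p)), -lam, lam)
    return y - D.T @ p                        # primal solution from the dual

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(30), np.ones(30)])
noisy = clean + 0.1 * rng.standard_normal(60)
denoised = tv_denoise_1d(noisy, lam=0.5)
# The result is nearly piecewise constant: its differences are sparse,
# exactly the structure the analysis prior promotes.
```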

### Pierre Vandergheynst (EPFL)

On localization and uncertainty in graph based representations

Graph-theoretical modelling of high-dimensional datasets or signals is slowly emerging as a versatile tool, merging elements of machine learning and signal processing with geometrical insights. Much work remains to be done, though, to understand the fundamental limits of these models. In this talk, I will discuss the interplay between localization and uncertainty in some graph-based representations, with a particular emphasis on their role in graph-based harmonic analysis.

### Bruno Torresani (Université de Provence)

Lp and entropic uncertainty inequalities

The uncertainty principle is originally a quantum physics principle stating that some families of observable quantities cannot be measured simultaneously with infinite precision. It can be turned into quantitative statements thanks to uncertainty inequalities, which provide bounds on the precision of simultaneous measurements of such quantities. Uncertainty principles have enjoyed renewed interest in mathematical signal processing with the advent of the sparsity concept; in this context, most uncertainty inequalities involve Lp concentration measures.

We shall present a general framework that yields uncertainty inequalities for analysis coefficients with respect to unions of frames. In the simplest case, this framework generalizes the classical support inequalities to frames, and carries over to infinite-dimensional settings. More generally, we obtain a series of uncertainty inequalities involving entropies, which yield new Lp uncertainty inequalities.

### Nir Sochen (Tel Aviv University)

The time-frequency uncertainty principle is exceptional!*

We discuss several insights and possible generalizations of the uncertainty principle applied to different features via a group-theoretic approach. We also study the relation to localization via the ambiguity function. We show that our intuition is misleading because we tend to rely on the time-frequency case, which is the exception and not the rule.

* This is joint work with H.-G. Stark, R. Levie, D. Lantzberg, F. Lieb

### Anders Hansen (University of Cambridge)

Sampling and subsampling in an analog/infinite-dimensional framework

In many applications such as medical imaging, seismology and wireless communication the problem of reconstructing an image or a function from samples is a fundamentally analog or infinite-dimensional problem. This is most often due to the physics behind the sampling device. For example in Magnetic Resonance Imaging or X-ray Tomography the sampling is done via an integral operator (the Fourier or Radon transform respectively), and thus we are faced with a full infinite-dimensional recovery problem. We will in this talk discuss some new concepts in sampling theory that have recently emerged from analyzing the infinite-dimensional framework. In particular, we will focus on a new definition called "the stable sampling rate" and demonstrate how this is a fundamental issue in analog/infinite dimensional sampling. We will also discuss the phenomenon of asymptotic incoherence in compressed sensing and show how the combination of semi-random sampling and asymptotic incoherence will dramatically outperform the classical random sampling strategies. This happens especially for analog problems.

A hack session will be given on the SMALLbox, a Matlab toolbox that was developed as part of the SMALL EU project and which incorporates many of the algorithms developed during the past three years by the project members.

To attend the hack session __you will need to bring along a laptop__.

Here are a few reminders to help you get the most of the SMALLbox hack session:

1) We will not provide computers, so you should bring your own laptop if you want to play with the toolbox.

2) SMALLbox runs on Matlab. If you do not have a standalone Matlab license, you should check with your institution which VPN solutions they provide to grant you remote access to a Matlab license. If you can run Matlab from home, then it should work during the hack session too.

3) The installation of SMALLbox and all the linked toolboxes is completely automated but the download and compilation process can take some time, so you will get much more benefit from the hack session if you can complete the installation beforehand. More information on the installation can be found on the wiki:

https://code.soundsoftware.ac.uk/projects/smallbox/wiki/InstallationGuide

4) Feel free to bring the code of your own problems and sparse decomposition/dictionary learning algorithms if you want to integrate them into the toolbox.

The hack session will take place on Tuesday 26 June at 13:30-14:30 and on Wednesday 27 June at 13:15-14:15, both days in room 209 of the Law Building. You will need to register for the hack session on arrival. We will meet at the registration desk 5 minutes before the hack session is scheduled to begin, and someone will show you to the correct location.

Poster boards will be available to display your posters in Rehearsal Room 2, just opposite the Arts Lecture Theatre where the talks will be held. This is also the venue for tea/coffee breaks and buffet lunch. Each poster will be on display throughout the two days.

**Posters requirements:**

- Each poster has to fit on a poster board that is 120 cm wide and 180 cm tall. However, posters should not reach down to the floor as this makes them hard to read. Posters should therefore be no larger than A0 portrait or A1 landscape.
- IMPORTANT: Posters wider than the stated dimensions will not fit on the poster boards and cannot be displayed. A0 landscape is TOO WIDE.
- Authors are responsible for putting up and taking down their own posters. Authors are encouraged to put up their posters at the beginning of the first day, and must remove them by 17:00 on the second day.

**Posters abstracts:**

**G. Puy and P. Vandergheynst (EPFL)**

**Robust joint reconstruction of misaligned images using semi-parametric dictionaries**

We propose a method for signal reconstruction in semi-parametric dictionaries. The proposed algorithm estimates both the signal decomposition and the intrinsic parameters of the dictionary during the reconstruction process. Some results about the convergence of the algorithm are presented. The method is used here for the joint reconstruction of a set of misaligned images. Experiments show that the proposed algorithm accurately recovers the set of images and is robust to occlusions and misalignments. This method may be of interest in, e.g., non-dynamic cardiac MR imaging, where one has access to only subsampled images of the heart at different positions.

**S. Molla, B. Ricaud, G. Stempfel, B. Torresani** **(Université de Provence)**

**Gabor window optimization for audio signal feature separation**

A classical problem in audio signal analysis is the extraction of sound components from specific time-frequency patterns. This is often done by time-frequency masking (Wiener-filtering type approaches), after selection of the pattern of interest. However, the performance of the extraction depends crucially on the resolution of the time-frequency representation, which is controlled by the time-frequency resolution of the analyzing window. As is well known, the latter is constrained by uncertainty principles. Nevertheless, there is still freedom for window optimization.

Based on prior work by F. Jaillet, we develop a general framework for window adaptation, relying on the numerical optimization of some time-frequency sparsity measure; more precisely, we optimize some Lp norm (or entropy) of the time-frequency transform of a selected waveform. We present an iterative scheme, based on recursive Gabor multiplier diagonalization, and discuss preliminary numerical results on synthetic and real signals. We also investigate alternative iterative schemes.

**J.H. Kobarg, J. Oetjen, T. Alexandrov, P. Maass (University of Bremen)**

**Mathematical models for large MALDI imaging mass spectrometry data**

In the last decade, matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS) has become a useful bioanalytical technique. The mass spectra collected over a flat sample reveal the spatial distribution of hundreds of molecular compounds. By slicing a sample into serial sections, measuring each individual section, and then merging all individual 2D MALDI-IMS datasets into one 3D dataset, one can perform 3D IMS analysis. 3D MALDI-IMS data is huge, reaching a million spectra and 10-100 GB per dataset.

Within the scope of the UNLocX project, special computational methods adapted to the huge quantity of data are developed. The methods perform baseline removal and peak picking on the individual spectra, reducing the data to a few important coefficients. Spatial segmentation of the compressed dataset groups the spectra by their similarity. Having the grouped spectra allows one to search for molecular masses of interest.

**C. Guichaoua, R. Gribonval and N. Bertin (INRIA)**

**Dictionary learning for sparse audio declipping**

Audio declipping is a case of audio inpainting consisting in the reconstruction of saturated samples. In recent work, this problem has been approached as a linear inverse problem with additional constraints, using sparse modelling of the clean parts of the signal. One of the key points of sparse modelling is the dictionary used for the representation. We study the impact of the choice of dictionary for this method of inpainting, focusing on the comparison between chosen dictionaries and learnt dictionaries. We also propose a variation of the K-SVD dictionary learning algorithm, adapted to learning on locally corrupted signals.

**B. Sturm (Aalborg University)**

**When "exact recovery" is exact recovery in compressive sampling**

Work in compressed sensing (CS) has employed at least three definitions of "exact recovery": 1) all true support elements have been found, and nothing more; 2) the normalized squared model error is less than \(\epsilon^2\); 3) the maximum absolute model error is less than \(\epsilon\). These parameters are usually made small, but their significance for recovery has not been analyzed. We show in this poster that \(\epsilon\) essentially provides a maximum allowed missed detection rate in the case of (2). Furthermore, we analyze the regime in which these criteria are equivalent, and thus "exact recovery" can really be called exact recovery.
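The three definitions can be made concrete in a few lines (an editorial sketch; names are illustrative). Note how an estimate with one tiny spurious entry fails criterion (1) while easily passing (2) and (3):

```python
import numpy as np

def exact_support(x_hat, x_true):
    """Definition 1: all true support elements found, and nothing more."""
    return set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true))

def small_l2(x_hat, x_true, eps):
    """Definition 2: normalized squared model error below eps^2."""
    return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2) < eps ** 2

def small_linf(x_hat, x_true, eps):
    """Definition 3: maximum absolute model error below eps."""
    return np.max(np.abs(x_hat - x_true)) < eps

x_true = np.array([0.0, 1.0, 0.0, -2.0])
x_hat = np.array([0.001, 1.0, 0.0, -2.0])    # one tiny spurious entry

ok_support = exact_support(x_hat, x_true)    # False: extra support element
ok_l2 = small_l2(x_hat, x_true, eps=0.1)     # True: relative error is tiny
ok_linf = small_linf(x_hat, x_true, eps=0.1) # True: max error is tiny
```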

**V. Abolghasemi and L. Gan (Brunel University)**

**Dictionary learning and joint sparse recovery for terahertz imaging**

In this work the problem of terahertz data reconstruction is addressed. We employ a dictionary learning method to obtain an appropriate sparse representation of 3-D data. Then an MMV-based image reconstruction with smoothness constraint is proposed. The proposed method exploits both spatial and temporal information while recovering the original data from an incomplete set of observations. We considered two types of data to evaluate the performance of the proposed approach; data acquired across a T-shape plastic sheet buried in a polythene pellet, and pharmaceutical tablet data (with low spatial resolution). The signal-to-noise-ratio, layer thickness evaluation, and chemical mapping analysis for the reconstructed data confirm the effectiveness and accuracy of the proposed method.

**X. Zhao, G. Zhou, W. Dai (Imperial College London)**

**A Novel Algorithm for Dictionary Learning: SimCO with a Weighted Objective Function**

We consider the dictionary update stage of the dictionary learning problem. Many benchmark algorithms cannot guarantee convergence to a global optimum. We show that the reason behind this is the singularity of the objective function. To address this problem, we propose Weighted SimCO: we decompose the overall objective function as a sum of atomic functions, and the key idea is to introduce multiplicative weighting coefficients so that atomic functions close to singular points are zeroed out. Numerical results demonstrate that the proposed method achieves significant improvements in both performance and speed.

**M. Yaghoobi (University of Edinburgh), S. Nam (INRIA), R. Gribonval (INRIA) and M. Davies (University of Edinburgh)**

**Cosparsifying Overcomplete Analysis Operator Learning**

We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer as to which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images, using some noisy cosparse signals.

**K. O'Hanlon and M. Plumbley (QMUL)**

**Fast Subspace Matching Pursuit**

Group, or block, sparse representations extend the general sparsity framework by incorporating the added assumption that certain atoms tend to be active together. Greedy algorithms based on Orthogonal Matching Pursuit have been proposed which incorporate this added assumption.

The most well-known of these is the Block OMP (B-OMP). An earlier proposed algorithm is Subspace Matching Pursuit (SMP). We propose a fast variant of SMP, based on a Modified B-OMP using a sensing dictionary. This Fast-SMP is computationally similar to B-OMP and shows improved guarantees and performance compared to B-OMP for Gaussian dictionaries.

**Q. Liu and W. Wang (University of Surrey)**

**Bimodal Dictionary Learning for Model-Based Source Separation of Noisy Mixtures**

Time-frequency (TF) masking is an effective method for source separation from convolutive mixtures. Robust estimation of the TF mask from mixtures is however a practical challenge, especially when the mixtures are degraded by acoustic noise. Integrating audio and visual modalities is a recent approach for noise-robust TF mask estimation. Here we present a new method for TF mask estimation. We model the bimodal coherence based on audio-visual dictionary learning. A bimodal dictionary is learned from audio-visual data, and then used to create a visually constrained audio mask in a probabilistic model-based source separation algorithm. The TF mask is refined iteratively using an expectation maximisation (EM) algorithm. We demonstrate the performance of the proposed algorithm on the XM2VTS database for noise-corrupted signals.

**I. Ram, M. Elad, and I. Cohen (Technion)**

**Generalized Tree-Based Wavelet Transform**

What if we take all the overlapping patches from a given image and organize them to create the shortest path using their mutual distances? This suggests a reordering of the image pixels in a way that creates a maximal 1D regularity. Could we repeat this process at several 'scales'? What could we do with such a construction? In this talk we consider a wider perspective on the above line of questions: we introduce a wavelet transform that is meant for data organized as a connected graph or as a cloud of high-dimensional points. The proposed transform constructs a tree that applies 1D wavelet decomposition filters, coupled with a pre-reordering of the input, so as to best sparsify the given data. We adapt this transform to image processing tasks by considering the image as a graph, where every patch is a node and edges are weighted by the Euclidean distances between corresponding patches. State-of-the-art image denoising results are obtained with the proposed scheme.

**T. Arildsen, T. L. Jensen, K. Fyhn, P. Li, P. Pankiewicz, J. Pierzchlewski, H. Shen, R. Grigoryan, S. H. Jensen, T. Larsen (Aalborg University, Denmark)**

**Compressed Sensing in Wireless Communication**

Compressed sensing has gained a lot of attention in recent years. While the famous Shannon-Nyquist bound for sampling of analog signals provides a sufficient condition on the sampling frequency, it can be reduced substantially in situations where a signal is sparse in a given domain. This leads to compressed sensing, which can be formulated as an optimization problem in which an under-determined system of linear equations is solved under the condition of a sparse solution.

In our research group at Aalborg University we are using compressed sensing theory to address some important practical challenges in radio frequency communication and, more generally, in analog-to-digital conversion. Radio frequency receivers for modern telecommunication standards in power-constrained devices are increasingly challenged by the necessary sampling frequencies and hardware power consumption. Compressed sensing techniques may help address some of these challenges, for example by reducing the necessary sampling rate. This is a move towards the original, but hardware-wise unrealistic, ideas of software-defined radios.

In order to utilise compressed sensing, we must be able to represent the relevant signals sparsely – or, alternatively, find new signal structures that allow compressed sensing.

In practice, various hardware imperfections as well as noise and interference limit the applicability of compressed sensing. These issues must be addressed to make compressed sensing practicable.
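The optimization view sketched above can be illustrated with a greedy solver. The following minimal Orthogonal Matching Pursuit (an editorial NumPy sketch, not the group's code; dimensions and coefficients are illustrative) recovers a sparse vector from far fewer random measurements than its ambient dimension:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse solution of y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual...
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # ...then re-fit all selected atoms by least squares.
        coefs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coefs
    x = np.zeros(A.shape[1])
    x[support] = coefs
    return x

rng = np.random.default_rng(2)
n, m, k = 256, 100, 5                          # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([1.5, -2.0, 1.0, 0.8, -1.2])
y = A @ x_true                                 # m << n linear measurements

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x_true)           # small when recovery succeeds
```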

**M. P. Romaniuk (Imperial College London), A. W. Rao (Imperial College London), R. Wolz (Imperial College London), J. V. Hajnal (King's College London) and D. Rueckert (Imperial College London)**

**Learning wavelet packet bases for compressed sensing of classes of images**

We propose an algorithm for optimising a wavelet packet basis with a cost function designed specifically for compressed sensing. We demonstrate the utility of this algorithm by training a basis on a set of brain MR images and establishing that it enables significantly more accurate approximations of unseen brain MR images than wavelets with the same levels of sparsity. Further experiments show that the learned basis can offer some improvement in compressed sensing reconstruction of unseen images. This work has been submitted to the MICCAI 2012 Workshop on Sparsity Techniques in Medical Imaging.

A buffet lunch will be provided on both days of the workshop, and a Banquet will be held on Tue. 26 June. Vegetarian options will also be provided.

The Banquet will take place at the Ecology Pavilion in Mile End Park, which is a short walk (about 10-15 minutes) from the Workshop venue. From Queen Mary University of London, walk down Mile End Road towards Mile End tube station. Turn left into Grove Road and continue past the Arts Pavilion. The Ecology Pavilion is on your left-hand side, shortly after the railway bridge.

Arrival in London

Looking for travelling information in London? Visit the Getting around London section

From: London Heathrow Airport (LHR)

Faster route: Heathrow Express to Paddington, then the Hammersmith & City Underground Line towards Barking to Stepney Green station.
Journey time: about one hour and 25 mins.

Cheaper route: Piccadilly Underground Line to Central London, change at Holborn onto the Central Line to Mile End.
Journey time: about 90 mins.

From: London Gatwick Airport (LGW)

Train (First Capital Connect, towards Bedford Rail Station) to Farringdon Underground Station, then take the Hammersmith & City Line to Stepney Green.
Journey time: about 70 mins.

Train (Southern or Gatwick Express) to Victoria, then by District line to Stepney Green. Journey time: about 80 mins.

From: London Stansted Airport (STN)

Train to Liverpool Street, then by Central Line to Mile End.
Journey time: about 70 mins.

From: London City Airport (LCY)

Public transport: Take the Docklands Light Railway (DLR) to Canning Town. Catch a Jubilee Underground Line to West Ham, then the District Line to Mile End.
Journey time: about 40 mins.

Taxi: Queen Mary is about 5 miles away from London City Airport. For a faster journey you may wish to consider getting a Taxi ("black cab"). The fare to Queen Mary will probably be about £20-£25.

From: London Luton Airport (LTN)

Shuttle bus to Luton Airport Parkway station, then First Capital Connect Rail (Thameslink route) to Farringdon. Change to Hammersmith & City Underground Line to Stepney Green or Whitechapel. If to Whitechapel, change to District Line for one stop to Stepney Green.
Journey time: about 90 mins.

From: St Pancras International Station (Eurostar)

Hammersmith & City Line eastbound to Stepney Green.
Journey time: about 40 mins.

The Event will take place at the Arts Lecture Theatre, Queen Mary University of London, Mile End Road, London E1 4NS.

[ View larger map ]

The venue is easily accessible by public transport. It is within a five minute walk of both Mile End Underground station (Central, District, and Hammersmith & City lines) and Stepney Green Underground station (District, and Hammersmith & City lines).

The workshop will be held in building number 29.

For travel information, see [opens in new window]:

Plan Your Journey

The Transport for London Journey Planner website provides detailed travel options between locations in London. Information includes how long a journey is likely to take, maps of the start, finish, and any interchanges, and even any stairs or escalators to expect along the way.

For travel to Queen Mary, Mile End Campus use:

Postcode: E1 4NS

The Tube

The simplest way of getting around in London is by Underground ("The Tube").

When travelling on London's public transport system, we recommend using an electronic Oyster card. This is cheaper and more convenient than conventional paper tickets bought with cash. Oyster cards can be purchased at most Tube stations. A refundable £3 deposit is usually required.

The Oyster card can be charged with money when you purchase it, or anytime afterwards. Each time you travel, the fare is deducted from the balance of the card. This is called pay-as-you-go. As long as you maintain a positive balance on the card, you can travel wherever you like on the Oyster system.

You can also buy a pre-pay Oyster card over the web before you arrive, which can be delivered directly to you before you travel to London. For more information on this see: Visit London: Oyster Cards (look for "International Visitors to London" at the bottom of the page).

You can use the Oyster card to pay travel fares on:

London Underground ("The Tube")

Buses

Docklands Light Railway ("DLR")

Oyster can also be used on tram services and many overland rail services within London (but some are excluded: check before you travel).

Suggested hotels:

- Ibis Hotel, London Stratford (about 30 mins from the venue)
- St Giles Hotel, Central London (about 35 mins from the venue)
- Hotel Ibis London City
- Days Hotel, Shoreditch

Maria Jafari

Centre for Digital Music

Queen Mary University of London

Mile End Road, London E1 4NS, UK

Tel: +44 (0)20 7882 7986

Fax: +44 (0)20 7882 7997