[MATLAB] Gram-Schmidt routine thoroughly explained


In Digital Signal Processing (DSP) one often works with, or at least encounters, the notions of vector and function spaces and, ultimately, orthonormal bases. The latter is one of those small building blocks that keep so many devices working and kicking.

The goal of this article is to publish MATLAB source code of the Gram-Schmidt algorithm. Given a set of linearly independent vectors, the algorithm returns a set of orthonormal basis vectors. This basis is said to span the vector space: any vector from the space can be uniquely expressed as a combination of those basis vectors.

This algorithm enforces the so-called orthogonality principle, one of the cornerstones of modern DSP. Its role can hardly be overstated. For example, without it the FFT wouldn’t work! And without the FFT… we would be somewhere back before the 19th century…

The shared algorithm is the classic one. A word of caution here – the classic algorithm is numerically unstable. One should implement the modified version, which requires only a small modification.
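
For reference, here is a minimal sketch of that modified variant (illustrative only, not the heavily commented file shared below). The only change versus the classical algorithm is that each freshly computed basis vector is immediately subtracted from all *remaining* columns, which is exactly what buys the numerical stability:

% Modified Gram-Schmidt (sketch). V is n-by-k with linearly independent columns.
function Q = mgs( V )
    k = size( V, 2 );
    Q = V;
    for i = 1:k
        Q(:, i) = Q(:, i) / norm( Q(:, i) );                       % normalize the current column
        for j = i+1:k
            Q(:, j) = Q(:, j) - ( Q(:, i)' * Q(:, j) ) * Q(:, i);  % orthogonalize the rest against it
        end
    end
end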

The source code can be downloaded [HERE]. I tried to comment it heavily so that it is maximally clear what is happening at each step.

In applications where the input matrix is large and both numerical stability and speed matter, one should instead use the optimized QR decomposition routines [wiki]. The QR decomposition returns the Q matrix (exactly a matrix of orthonormal basis vectors, as for the Gram-Schmidt algorithm) plus an upper triangular matrix R of dot products (basis vector i with input vector j)…
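
In MATLAB that is a one-liner:

[Q, R] = qr( A, 0 );   % economy-size QR: columns of Q are an orthonormal basis for span(A)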


Let me share a small gem. Suppose you are given 4 out of the 5 orthonormal vectors that span a 5-dimensional vector space, and you would like to find the 5th, missing orthonormal vector. With this routine you can find it easily, the following way:

  1. Initialize the input matrix with zeros
  2. Put the 4 orthonormal vectors you have into the first 4 columns
  3. Fill the last column with random values
  4. Run the routine

In the output you should get a matrix whose first 4 columns are identical to the 4 given orthonormal vectors, while the 5th and last column will be the missing orthonormal vector – the one you were looking for. In fact, you can have several missing orthonormal vectors and the algorithm will still work.
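
A minimal sketch of the trick (the routine name gram_schmidt is a placeholder – use whatever the downloaded function is called – and U4 stands for your 4 known orthonormal columns):

n = 5;
V = zeros( n, n );          % 1. initialize the input matrix with zeros
V(:, 1:4) = U4;             % 2. the 4 known orthonormal vectors
V(:, 5)   = randn( n, 1 );  % 3. random values in the last column
Q = gram_schmidt( V );      % 4. Q(:,5) is the missing orthonormal vector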


[MATLAB MEX] Complementary filter for AHRS


Small, lightweight and cheap Inertial Measurement Units (IMUs) have made their way into many portable electronic devices, especially those involving physical movements and rotations. There are plenty of applications, ranging from toy airplanes to serious Inertial Navigation Systems (INS).

What exactly do IMU devices measure? What useful information do they provide? An IMU typically comprises 3 devices – an accelerometer, a gyroscope and a magnetometer. All three typically provide 3 values at a time (axes X, Y and Z). These raw measurements alone are not enough to tell how our plane is oriented. A good starting point on the necessary maths can be found [HERE].

Providing an exhaustive explanation of orientations and the accompanying linear algebra is out of the scope of this article. A reader should understand only a few things: the orientation of our plane is always expressed with respect to some fixed reference (very commonly the Earth), and it can be understood as the transformation from the fixed reference frame to the current orientation of the local sensor frame. This transformation is usually encoded using a 3×3 rotation matrix or, more conveniently, using a quaternion.
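
For instance, a unit quaternion q = [w x y z] maps to its rotation matrix as follows (one common convention – be aware that sign and ordering conventions differ between sources):

q = q / norm( q );   % must be a unit quaternion
w = q(1); x = q(2); y = q(3); z = q(4);
R = [ 1-2*(y^2+z^2),   2*(x*y-w*z),   2*(x*z+w*y);
        2*(x*y+w*z), 1-2*(x^2+z^2),   2*(y*z-w*x);
        2*(x*z-w*y),   2*(y*z+w*x), 1-2*(x^2+y^2) ];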

Madgwick’s complementary filter algorithm [PDF] for quaternion computation from calibrated IMU data is widely used. The author provides MATLAB code and lightweight, third-party-library-free C code. The latter can be run on an embedded device with limited computing power and memory storage.

For prototyping needs and IMU data analysis it is better to use the native MATLAB code. The downside is that for large corpora of IMU data the code is very slow (compared to the C version). I developed a MEX wrapper function around the existing code with very few modifications to it (e.g. avoiding the approximation of the inverse square root).

This is the AHRS version of Madgwick’s algorithm [WWW], which uses the magnetometer sensor. It is therefore advised to keep the sensor away from large metallic objects and to keep it still for at least a few seconds after starting the recording.

The algorithm has a single parameter, beta, which controls the trade-off between the accelerometer+gyroscope information and the magnetometer information. The best value can change from sensor to sensor. You can set a large beta for the first few seconds (say 3-8) and then a small one for the rest of the run (say 0.01-0.1). This makes the filter converge much faster to its true state (the sensor frame orientation with respect to the fixed frame of reference). Failing to do so will make the filter spit out unreliable quaternions.
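
A sketch of such a two-stage beta schedule, written against Madgwick’s reference MATLAB class (assuming the MadgwickAHRS class with SamplePeriod and Beta properties – adapt the names if you use the MEX wrapper instead; gyr, acc and mag are assumed to hold the calibrated samples):

fs   = 256;                                              % sample rate in Hz (example value)
ahrs = MadgwickAHRS( 'SamplePeriod', 1/fs, 'Beta', 5 );  % large beta: fast initial convergence
quat = zeros( size( gyr, 1 ), 4 );
for t = 1:size( gyr, 1 )
    if t == 3 * fs                                       % after ~3 seconds of still data...
        ahrs.Beta = 0.05;                                % ...switch to a small runtime beta
    end
    ahrs.Update( gyr(t,:), acc(t,:), mag(t,:) );         % gyroscope in rad/s
    quat(t, :) = ahrs.Quaternion;
end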

The source code and a compiled MEX file for 64bit Mac OS X (Mavericks), Matlab R2013a can be downloaded [HERE].

There is a *lot* more to say about the algorithm and its practical side. I encourage the reader to investigate this exciting domain of signal processing.


[MATLAB] Local Correlation Tracking in Time Series

 

Correlation is an important notion in many fields of research and applied science. There is a plethora of practical processes that literally spew out time-series, often many at once. Things get more interesting (and more complicated) when we need to process multiple time-series together.

Let a process produce two time-series that reflect its internal state. Suppose that the process runs normally as long as some relation holds between the two time-series at any given moment. If this relation is broken at some instant, or for some short period of time, we can say that there is a fault (the interpretation can vary depending on the application).

This generic and simple example applies to many practical processes. In essence we track local correlations, which should stay close to 1 while the system runs normally and drop to lower values when the relation is broken.

In the literature one can find many sophisticated algorithms that perform correlation tracking. In this article we briefly present a simple, yet effective, algorithm which robustly estimates the correlation between two time-series. It can detect correlation in the presence of noise (up to some level) and can track non-linear relationships.

The algorithm presented in [PDF] is based on comparing local auto-covariance matrices. The comparison is done via spectral decomposition of the local auto-covariance matrices using the Singular Value Decomposition (SVD). The algorithm’s output is data-dependent and does not require many obscure and hard-to-tune parameters. It can also run in an online setup – processing the data as it arrives.

The algorithm has been implemented in MATLAB in two versions – batch and online. The online version demonstrates the capability to track changes and reflect them in the local correlation score (it needs to be adapted for online incoming data). We implemented exponential weighting of past time windows!
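
To make the idea concrete, here is a heavily simplified sketch of the scoring step (illustrative only – not the actual batch/online implementation, and not the paper’s exact scoring rule): the dominant spectral directions of the two local auto-covariance matrices are compared, and a score of 1 means the local behaviours are perfectly aligned.

function score = local_corr_score( x, y, w, k )
    % x, y : time-series windows, w : sub-window length, k : subspace size
    Ax = local_autocov( x, w );                % w-by-w local auto-covariance estimates
    Ay = local_autocov( y, w );
    [Ux, ~, ~] = svd( Ax );  Ux = Ux(:, 1:k);  % top-k spectral directions of each matrix
    [Uy, ~, ~] = svd( Ay );  Uy = Uy(:, 1:k);
    score = mean( svd( Ux' * Uy ) );           % cosines of the principal angles between subspaces
end

function A = local_autocov( x, w )
    x = x(:);
    n = numel( x ) - w + 1;                    % stack overlapping sub-windows as rows...
    W = zeros( n, w );
    for i = 1:n
        W(i, :) = x(i : i + w - 1)';
    end
    A = cov( W );                              % ...and take their covariance
end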

You can download the code [HERE].


MATLAB : Output custom paper size figures from command-line

Matlab is great for rapid prototyping and for plotting nice figures. It is less nice when you start to demand specific things. One such thing is plotting and saving a nice figure using a custom paper size, position and so on.

Imagine you want to obtain that nice vectorized figure saved as PDF but… on a custom paper size. You don’t want to see nasty white areas around your figure, or your figure resized to fill an A4 or Letter page. So, what is the solution if you want to automate the process?

After some time messing with the documentation, source code and different toolboxes (pulling in even more dependencies), I came up with a simple solution. All you need to know is that you can set all the properties of your figure from the command-line (or in a script). All the magic happens through the use of the “set” command.

Let’s assume that you just plotted a figure.

h = figure;

plot( x, y );   % x and y are your data vectors

And you want to set it to a certain size and position on the screen. First adjust the size and position of your sample figure by hand, and then get the ‘Position’ property numbers:

ps = get( h, 'Position' );

and then reuse them for each figure you would like to apply them to:

set( gcf, 'Position', ps );

Now you may want to set the paper size so that the page focuses on your figure, with as little white area around the plot as possible. By default it is either A4 or US Letter format. If you don’t change it, your figure will be saved onto a full A4 or Letter page. This is awful in most cases, I agree. You can get the values from your adjusted test figure and set the paper size for all other figures as follows:

pps = get( h, 'PaperSize' );

set( gcf, 'PaperSize', pps );

“How do I adjust the paper size to match my figure size?”, I hear you asking. Well, you can do it in the File -> Print Preview dialog of your current test figure. There you can set the Width and Height of your paper and see how your figure fits. Then you note the magic numbers and reuse them as indicated above.

Finally comes the important trick. If you skip it, your figure will not fill the area of your custom-size paper: you will see some part of the figure and plenty of white space. You need to tell MATLAB to put the figure into the new area and fill it. Do this:

set( h, 'PaperPositionMode', 'auto' );

This is it. Now you can use the standard “saveas” function and get your figure in your preferred format.

saveas( h, 'nice_plot.pdf' );
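
Putting it all together (the Position and PaperSize numbers below are placeholders – substitute the ones you read off your own test figure):

h = figure;
plot( x, y );                                        % x, y: your data
set( h, 'Position',          [100 100 560 420] );    % on-screen size and location in pixels
set( h, 'PaperUnits',        'centimeters' );
set( h, 'PaperSize',         [14 10] );              % custom paper size
set( h, 'PaperPositionMode', 'auto' );               % print the figure at its on-screen size
saveas( h, 'nice_plot.pdf' );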

That’s it, folks. Let me know if something is not clear.


Parallel AutoClass for Unsupervised Bayesian Clustering


We humans are much more efficient at analyzing a complex phenomenon when it is well structured and organized. Say we are given 1M points and asked to do some analysis. We can inspect each point separately (a very daunting task), or we can inspect, say, 10 groups of points, i.e. clusters. Finding such clusters is important nowadays, since we are literally drowning in data! In the machine learning community this sort of problem is called clustering, and it is often addressed in an unsupervised [COURSE] manner, i.e. no point labels are given. Only the data.

One smart fella may tell you that the standard k-means algorithm [WIKI] will get the problem solved. What he doesn’t realize is that he also needs to specify the number of clusters, which is not known a priori. This is a very important parameter if we want to inspect our data based on its natural clusters.

Another concern, much less evident to our smart fella, is data dimensionality. If his data has hundreds, thousands or even more dimensions, he may be cursed without knowing it – the curse of dimensionality [WIKI]! In short, distances in such high-dimensional spaces lose their meaning, and it becomes difficult to tell which points are close and which are far!

One solution is to use unsupervised Bayesian methods that model the data as finite mixtures, treating each dimension independently. Of course, this independence assumption does not hold in general for all possible data, but it is a very viable hypothesis for data exploration.

The AutoClass [WWW] software, developed at NASA Ames Research Center [WWW], is a solution to the problem. Given vectorial data, it:

  • handles real-valued and discrete data;
  • finds the natural data clusters;
  • scales roughly linearly in the number of data points;
  • returns a probabilistic membership for each data point;
  • allows feature correlation within each cluster;
  • works with missing data.

The algorithm finds the classes that are most probable with respect to the data and the model. For real-valued data the model uses a mixture of Gaussians; for discrete data, a mixture of Bernoulli distributions. It has been successfully used as a data exploration tool to discover new classes of infra-red stars during the NASA IRAS project [WWW], as well as new classes of proteins in DNA sequencing applications. In my research I used it as a data exploration tool, with some attempts to learn better visual features from document images [DOCEXPLORE].

The software is distributed as a source tarball and can be compiled pretty easily on any major operating system. The problem is that the software was written back in the wild 90’s, with only single-processor computation capabilities. As will be clear from the very first run, the computations are pretty heavy even on modern processors and become even heavier on large datasets. So, the goal of this blog post is to share a parallelized version of the AutoClass algorithm. The very first Google search for “parallel autoclass” will show you an academic publication [IEEE] explaining how the algorithm was modified to run on a distributed computation system. It is embarrassing that they did not share the code of their P-AutoClass! To the best of my knowledge, nobody else has attempted to give a second chance to this forgotten but useful data exploration tool!

Nevertheless, thanks to their paper it was straightforward to identify the code blocks with the most intensive computation, which were luckily also the easiest to parallelize. Relatively easy, that is. I used the industry-standard OpenMP framework [WWW] to parallelize the for loops in question. The catch was to watch out for shared and private variables in the loops, as well as to deal with a few reductions…

Ok, ok! No more technical implementation details – [HERE] is the source tarball. Just run “make” from the console and within a few moments you will get the “autoclass” binary. I used the Intel C compiler, so you may want to comment out the CC variable to fall back on the standard GNU C compiler.

The first barrier will be preparing the data files. Their format is well explained in the documentation accompanying the original AutoClass distribution [TAR]. If you use MATLAB, [THIS] toolbox simplifies the task greatly and also allows you to call the AutoClass binary and get the results back into the environment.

A final word of warning: do not expect the software to be a magic black box spitting out the best data clusters just the way you want or expect in your application! You need at least a basic understanding of the algorithm, and you should be an expert (you can become one) on the data you work with! Make sure you understand how your data is generated and what its properties are, try different non-destructive data transformations, and then you can hope the tool will be useful for you.

Credits go, of course, to NASA Ames Research Center [WWW] and to the brave fellas who developed it there. Thanks also to Michael Lewicki for writing the Matlab toolbox interfacing with the binary.


Keeping your Raspberry Pi alive: enabling Hardware Watchdog under Arch Linux

 


The Raspberry Pi [WWW] is a nice platform for implementing potentially complex & useful systems. Once prototyping and testing are over, all you want is for the thing to “just work”. And yes, we want to avoid resetting the device manually too! Unfortunately, complex electronics like the Raspberry Pi may get stuck, either at the software or at the hardware level. So, the question is: can we reset the device automatically when it gets stuck and shows no signs of life?

The solution is to use the BCM2708 chip’s hardware watchdog [PDF]. A userspace daemon feeds the watchdog at regular intervals; if the system hangs and the watchdog is not fed in time, the hardware resets the board for us. Assuming that you are running a Linux-based distribution adapted for the Raspberry Pi, this is going to be a 2-step dance in the terminal.

The watchdog can be enabled by loading the kernel module:

sudo modprobe bcm2708_wdog

The module is not loaded by default on each boot of the system. To make sure it gets loaded every time the Raspberry Pi starts, create a file named /etc/modules-load.d/bcm2708-wdog.conf with the following contents:

bcm2708_wdog

Next, we need to enable the watchdog service, which will feed the watchdog at regular intervals. First you enable the service (this persists across reboots):

sudo systemctl enable watchdog.service

Second, you start the service in the familiar fashion by issuing the command:

sudo systemctl start watchdog.service

But before you start the service, you may want to look at the watchdog daemon configuration file /etc/watchdog.conf. If you loaded the module and enabled the service, the file should already be there with some meaningful defaults. I recommend skimming through the corresponding man page to get an idea of the defaults and the possibilities at your fingertips:

man watchdog.conf
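
For illustration, a minimal configuration could look like this (the option names are genuine watchdog.conf options; the values are just examples – check the man page for your defaults):

# /etc/watchdog.conf (excerpt)
watchdog-device = /dev/watchdog    # the hardware watchdog device node
interval        = 10               # seconds between keep-alive writes
max-load-1      = 24               # reboot if the 1-minute load average exceeds this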

Credits go to [WWW] and [WWW], with instructions on how to do the same on Debian.


Raspberry Pi: Configuring Avahi on Arch Linux


Manual network configuration on a local network is a fun thing, unless you need to get things working right & quick. By this I mean assigning IP addresses manually, remembering the numbers and typing them by hand, and so on. You get the idea.

Often, you want to know what services are available on the local network and how to connect to them. A simple example is a LAN printer or a file server. All you want is for the device name to simply pop up in your Explorer window or in whatever application-specific program connects to such services.

In this blog entry, without pretending to be complete or even accurate, I will share my experience of making services running on a Linux PC visible, using Avahi [WWW] [WWW]. Similar capabilities in the Mac OS X world are seamlessly provided by Bonjour [WWW]. We will focus on Avahi configuration in Arch Linux, keeping in mind that with minor (maybe even simpler) steps the same can be accomplished on any other Linux distribution. These and similar solutions implement what is called Zero-configuration networking [WWW]. Why not roll out an open-source solution featuring local service discovery for all the devices in your home network?

With all this preliminary context in place, my need for Avahi services arose when I started to mess with the Raspberry Pi [WWW]. As I do all my activities on the device over SSH, it quickly became evident that a static IP address assigned to the network card manually is not the most elegant solution, especially when moving from network to network (e.g. showing masterpieces off to friends).

I am assuming that the network is configured and that an SSH daemon is configured and running on the device. What I want is to be able to connect to the device by hostname, without messing with the /etc/hosts file on my Mac or on any other machine acting as an SSH client.

What you need is to install several packages (run the commands as root or with the sudo prefix):

pacman -S avahi nss-mdns

pacman -S dbus pygtk python-dbus python2-dbus

Now you want to enable the Avahi daemon so that it persists across reboots, like this:

systemctl enable avahi-daemon.service

To start it right away, run these commands (yes, D-Bus must be running before Avahi starts):

systemctl start dbus.service

systemctl start avahi-daemon.service

If all is OK, you can check the running processes using your favorite top-like utility to see if dbus-daemon and avahi-daemon are running.

Now we need to make a small correction to /etc/nsswitch.conf to reflect how hostname resolution should take place. Add or modify the existing line starting with “hosts:”:

hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

Reboot the device to double check if the services are up and running.

When all is done comes the tasty part. Check the output of the following commands (even from another machine with Avahi installed):

  • avahi-browse --all
  • avahi-discover
  • avahi-resolve -n some_hostname.local

You should see some familiar hostnames advertising themselves to the local network and telling what services they are offering. You can see their IP addresses, MAC addresses, service names etc.

Suppose our host is called “mountain”, runs an SSH server and has Avahi configured and running. Now you can easily SSH into that box from any other machine on the LAN like this:

ssh user@mountain.local

This said, and coming back to my Raspberry Pi hacking: it suffices to enable DHCP on the Ethernet port to be able to connect to the device on any other network. Of course, it is assumed that the SSH client and the Raspberry Pi are both connected to the same LAN segment…

Credits go to [Arch-WIKI].


Extreme Learning Machine MATLAB MEX implementation


In the real world we have plenty of data sources (e.g. finance, bioinformatics, environment monitoring, multimedia etc). Often we want to leverage this data to predict which event will occur or which strategy to choose. Simply put – what are those measurements hiding from us? Can we predict, based on the observed data?

The Machine Learning (ML) [WIKI] field of Computer Science answers just this question, from very different viewpoints and under different assumptions. In ML jargon, we want to find a classifier, or a predictor (in statistics parlance), which will serve as a black box spitting out predictions for the data we want to classify & categorize. The problem is that the box is not that black – it has to be built first.

There is a plethora of methods to learn a classifier from given data, ranging from the simplest & dumbest to the very sophisticated & complex. Without trying to be exhaustive, one could mention Nearest Neighbor, Naive Bayes, Logistic Regression, Support Vector Machines, Neural Networks, Random Forests and many others.

In the same real world, we are often given either very little data or really huge datasets. That’s how the world works, folks. In this blog entry we will speak about recent advances in Machine Learning that address the second challenge – learning & classification on very large datasets.

I implemented the Extreme Learning Machine classifier during a Raspberry Pi [WWW] coding event at the University of Rouen [WWW], thanks to its low power and memory requirements. The method takes the neural network approach but does not require the very expensive and time-demanding iterative training (backpropagation and similar techniques) found in the classic literature. It implements a single-hidden-layer feed-forward neural network (SLFN) with multiple theoretical and practical underpinnings:

  • scales to large and even very large datasets;
  • very few easy to tune parameters;
  • can be cast naturally in classification or regression contexts;
  • very fast training & classification times;
  • can model highly nonlinear data with kernels;
  • generalizes the LS-SVM and P-SVM algorithms.

The first main point is that learning such an SLFN is possible without iterative tuning! In recent decades, gradient descent, Back-Propagation and least-squares solutions to RBF networks have been very popular and closely studied. They have nice properties but suffer from multiple problems:

  • learning algorithms are seemingly different with no clear connections;
  • specific parameters that need an expert to tune them;
  • prone to overfitting;
  • can be trapped in local minima/maxima;
  • often very computationally expensive.

The second main point is that the hidden node parameters need not be tuned by a learning algorithm! This is backed by both theoretical and practical evaluation! It has been shown on multiple public datasets that the ELM algorithm provides better or comparable performance to LS-SVM at a much lower computational cost!

For more technical information on inner workings and evaluations, see [WWW] with PDFs and slides. This code implements the regularized version of the ELM algorithm from [PDF].
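
To give an idea of how little there is to it, here is a minimal sketch of regularized ELM training and prediction (illustrative only – not the tidied-up code from the download). X is N-by-d data in [-1,1], T is an N-by-c one-hot target matrix, L is the number of hidden neurons and C is the regularization parameter:

function [W, b, beta] = elm_train( X, T, L, C )
    d = size( X, 2 );
    W = rand( d, L ) * 2 - 1;                       % random input weights – never tuned!
    b = rand( 1, L ) * 2 - 1;                       % random hidden biases – never tuned!
    H = 1 ./ ( 1 + exp( -( X * W + repmat( b, size( X, 1 ), 1 ) ) ) );  % hidden layer (sigmoid)
    beta = ( H' * H + eye( L ) / C ) \ ( H' * T );  % regularized least-squares output weights
end

function Y = elm_predict( X, W, b, beta )
    H = 1 ./ ( 1 + exp( -( X * W + repmat( b, size( X, 1 ), 1 ) ) ) );
    Y = H * beta;                                   % class scores; pick the max per row
end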

Now back to the main reason: we want code that runs on our data, quickly returns the trained model, and is just as fast at classifying any future data. I started from the publicly available MATLAB implementation [HERE] and tidied it up into two clean and small functions. Further, I developed a C++ library that can be seamlessly integrated into any project. In my case, I interfaced the code with MATLAB using the MEX interface.

You can download the pure MATLAB code and the library with pre-compiled MEX files [HERE]. The code consists of two parts – a training and a prediction function. The library is provided as template code that simply needs to be included in the source code that uses it.

We have also set up a Git repository [HERE]. Feel free to branch, commit back, comment etc.

The only dependency to compile the MATLAB MEX or a C++ project using the library is the Eigen3 linear algebra [WWW] framework. The package provides pre-compiled MEX files for Mac OS X 64bit and Ubuntu 64bit systems.

Before launching training and prediction, make sure your data attributes are normalized between -1 and 1!
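
A quick way to do that per attribute (bsxfun keeps it compatible with older MATLAB releases; watch out for constant columns, which would divide by zero):

Xn = bsxfun( @rdivide, bsxfun( @minus, X, min( X ) ), max( X ) - min( X ) ) * 2 - 1;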

Want an idea of how fast the algorithm is on a large dataset? The algorithm provides state-of-the-art classification results on the Covertype database [WWW], which features 581012 samples of 54 dimensions. Training on a random half of the database with 25 hidden neurons took only 0.72 seconds on a single core of an Intel Xeon E3-1245 v2! Approximately the same amount of time was needed to classify the remaining half of the data.

On low-power, low-memory devices manipulating *much* smaller datasets, such as the Raspberry Pi, learning and classification will still complete in a fraction of a second.


Longest Common Subsequence MATLAB MEX


In many applications, e.g. biological ones, we want to compare or match two sequences. For example, two organisms each have their own DNA sequence [WIKI], and a comparison would reveal their similarity based on molecules called bases (Adenine, Guanine, Cytosine and Thymine). Looking for the longest common substring is of limited use here, since random insertions may ruin the matching score. The Longest Common Subsequence (LCS) algorithm instead yields a similarity measure based on a common subsequence: the order of the matching elements is preserved, but they need not be consecutive.

There is a number of algorithms solving this Dynamic Programming [WIKI] problem, which is generally O(mn). Optimizations may trade time for space or vice versa, while the asymptotic complexity of the algorithm remains the same. I developed a straightforward version of the algorithm for MATLAB [WWW] as a MEX file, which uses extra memory to gain some speed. Sure, there is a pure MATLAB implementation [HERE], but it relies heavily on for loops, which slows down the execution.
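
For reference, the textbook O(mn) recurrence looks like this in plain MATLAB (slow and loop-heavy – exactly why the MEX version exists):

function len = lcs_length( a, b )          % a, b : int32 vectors
    m = numel( a );  n = numel( b );
    L = zeros( m + 1, n + 1 );             % L(i+1,j+1) = LCS length of a(1:i) and b(1:j)
    for i = 1:m
        for j = 1:n
            if a(i) == b(j)
                L(i + 1, j + 1) = L(i, j) + 1;                     % extend the common subsequence
            else
                L(i + 1, j + 1) = max( L(i, j + 1), L(i + 1, j) ); % drop one element from a or b
            end
        end
    end
    len = L(m + 1, n + 1);
end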

Download the source code and the compiled MEX file for Mac 64bit [HERE]. The implementation should be considered beta, since it has only been verified on a few simple, easy-to-check examples.

The function expects two int32 arrays. Many existing solutions work with character strings only and are thus limited to a 255-element vocabulary. This implementation allows a much richer vocabulary (more than 4 billion symbols).

If you want to improve the straightforward version, take a look at Generalized Suffix Trees [WWW] solving the LCS problem [HERE] with some code.

Credits go to [WWW], [WWW], [WWW] and [WWW].


[EIGEN] How to get in and out data from Eigen matrix

Eigen is a C++ template library [WWW] for linear algebra. You can write expressions involving matrices and vectors as in Matlab [WWW]. No more need to create your own home-made code for matrix addition, multiplication, linear equation solving etc.

There are multiple libraries on the web for matrix operations and beyond. Well-known examples are the facilities in OpenCV [WWW], Armadillo [WWW] and Eigen, our choice here. A nice comparative study is given in [WWW], showing relative performance for different matrix sizes and operations (add, multiply, transpose, inverse and SVD). In this blog post we will stick with Eigen.

All is great once you have your data inside the Eigen data structures. You can multiply them, add them, use them in solvers, invert them, work on rows, columns, blocks etc etc etc. In practice, two questions inevitably arise:

  1. I have an array of doubles. How to put it into the Eigen matrix?
  2. I have an Eigen matrix. How to get the values out into a plain array without loops?

The good news is that you don’t need to write loopy code to get your data in and out of the Eigen objects. You can map the existing memory occupied by a double array to an Eigen object and vice versa! Great, eh? How does it work? How do you do it? Gimme some code!

Suppose you have an array with double values of size nRows x nCols.

double *X; // non-NULL pointer to some data

You can create an nRows x nCols size double matrix using the Map functionality like this:

Map<MatrixXd> eigenX( X, nRows, nCols );                 // a view: no copy, shares the memory behind X
MatrixXd eigenCopy = Map<MatrixXd>( X, nRows, nCols );   // or: copy the data into a fresh matrix

The Map object wraps the existing memory region in Eigen’s data structures: the first (view) variant shares the memory and copies nothing, while assigning a Map to a MatrixXd (the second variant) copies each element into a matrix that owns its own storage. Either way, a single line spares you the ugly matrix-creation code with a for loop copying every element in the right order.

Now what if you have got an Eigen matrix with some result and you want to get out a plain double array? Guess what, you can use the Map once again!

MatrixXd resultEigen;   // Eigen matrix holding some result

double *resultC = new double[ resultEigen.size() ];   // allocate the output buffer first!

Map<MatrixXd>( resultC, resultEigen.rows(), resultEigen.cols() ) = resultEigen;

After the Map assignment, the buffer behind resultC holds your computation result. Just make sure you also pass along the size information of the array. By default the layout is column-major, that is, the elements are written out column by column from the Eigen matrix.

That’s all folks! Don’t forget to look at (and even print out) the great tutorial [WWW]. It shows many typical use cases on small examples.

Matlab users: print out and check out the correspondence between Matlab code and Eigen code [HERE].
