Posted by Simon Jaeger on Oct 10, 2017 in #datascience, #machinelearning

This blog post is a part of my own personal development within data science and machine learning. With these blog posts I will share my learnings as a way to encapsulate the knowledge and bring more people on board. If you find any mistakes or alternative approaches, feel free to comment and provide your feedback.

Visualizing your data is one of the best ways to understand it. There are many different approaches to “seeing” the data – different chart types, dimensions, etc. In this blog post, we will look at the Breast Cancer Wisconsin (Diagnostic) Data Set (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)). The data set is easy to get using scikit-learn (http://scikit-learn.org/stable/index.html), as it’s built in via the load_breast_cancer (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html) function.

The data set contains 569 samples of features from digitized images of fine needle aspirates of breast masses. It has 30 dimensions, which we will reduce to a manageable number of dimensions so that we can visualize it – in 3D. We will also create a heat map of the features to gain an understanding of how the scaling/decomposition affects the data set.


We will be using Python to work with the data set. Additionally, we will use Jupyter Notebook to host the visualizations. You can use any environment you like if you want to follow along. If you don’t have an environment configured, you can use tools like Azure Machine Learning Workbench (https://docs.microsoft.com/en-gb/azure/machine-learning/preview/quickstart-installation) or Anaconda (https://anaconda.org/).

Loading data

We will first be importing the libraries we need. These will help us load the data, process it and create the visualizations. We will be using the following libraries:

  • NumPy – powerful array object and more.
  • pandas – data structures and data analysis tools and more.
  • scikit-learn – machine learning tools.
  • Plotly – graphic library for interactive graphs and more.

Notice that we will only be importing some parts of some libraries.
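A minimal set of imports covering the steps below could look like this (the aliases, such as go and iplot, are my own choice):

    import numpy as np  # used indirectly by pandas and scikit-learn
    import pandas as pd

    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    import plotly.graph_objs as go
    from plotly.offline import init_notebook_mode, iplot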

Next up we can load our data set using the load_breast_cancer function. This will give us an object that contains the samples, features, feature names and labels. We will then create a DataFrame object of the data, which will make it much easier to transform and work with the data.
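A sketch of the loading step (the names data and df are my own):

    # Load the data set; the returned object bundles the samples,
    # labels, feature names and target names
    data = load_breast_cancer()

    # Wrap the samples in a DataFrame, using the feature names as columns
    df = pd.DataFrame(data.data, columns=data.feature_names)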

If we want to view our data, we can use the head function on the DataFrame object.
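For example:

    # Show the first rows of the data set
    df.head()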

[Output: the first rows of the DataFrame, one column per feature]

Pre-process and decompose data

Next up we will start working with the data to fit our scenario. A common practice for many machine learning estimators is to standardize the features. Many estimators assume that the features look more or less like standard normally distributed data; if an individual feature is on a very different scale than the others, the algorithm may behave badly.

We will use a StandardScaler and fit it to our data. Finally, we will use it to transform the data.
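In code, that step might look like this (scaled_data is my name for the result):

    # Standardize the features to zero mean and unit variance
    scaler = StandardScaler()
    scaler.fit(df)

    scaled_data = scaler.transform(df)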

We’re now ready to decompose (factorize) our data. The goal is to break down the data from 30 dimensions to 3 dimensions – so that we can visualize it. We will use a statistical procedure called principal component analysis (PCA for short, https://en.wikipedia.org/wiki/Principal_component_analysis). This technique is great for finding patterns while trying to retain the variation in the data set. The artifacts of such an analysis are called principal components, which we use to explain as much of the variance (in the data set) as possible. This means that some of the variance may be lost.

Much like the step before, we will use a PCA and fit it to our data. This is then used to transform the data.
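A sketch of the decomposition (pca_data is, again, my own name):

    # Reduce the 30-dimensional data to 3 principal components
    pca = PCA(n_components=3)
    pca.fit(scaled_data)

    pca_data = pca.transform(scaled_data)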

We know that we will most likely not retain all of the variance. We can view it using the explained_variance_ratio_ field.
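For example:

    # Show the fraction of variance explained by each principal component
    pca.explained_variance_ratio_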

[Output: the explained_variance_ratio_ array for the three principal components]

Each element in the array shows how much of the variance the corresponding principal component explains. Notice that the last principal component/dimension only retains 9.39% of the variance. In some cases, adding additional dimensions will not do wonders.

In this case, we are able to retain 72.64% of the variance.

Visualize principal components

If we want to understand how each feature correlates with each principal component, we can visualize it using a heat map. The first thing we need to do is to configure the Plotly library. This is easily done as such:
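Assuming the imports from earlier, the configuration amounts to a single call:

    # Render Plotly charts inline in the notebook
    init_notebook_mode(connected=True)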

We will then create the heat map data using the Heatmap function (within go). The heat map data is created by supplying the principal components as the z-axis. The x-axis is the feature and the y-axis is the principal component. Finally, we can plot the heat map using the iplot function.
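Here is a rough sketch of that step; the ‘PC 1’ through ‘PC 3’ labels are my own:

    # Rows of pca.components_ are the principal components,
    # columns are the original features
    heatmap = go.Heatmap(
        z=pca.components_,
        x=data.feature_names,
        y=['PC 1', 'PC 2', 'PC 3'],
    )

    iplot([heatmap])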

Since we created the heat map using Plotly, it is also interactive. Notice how we can hover over individual cells to discover more about them.

[Interactive heat map: features on the x-axis, principal components on the y-axis]

Visualize data

Finally, let’s visualize the transformed data. The first thing we will do is to split the benign and malignant samples into individual data sets. We will do this by first adding the label (benign/malignant) for each sample to the transformed data; the labels were loaded with the load_breast_cancer function from before.
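A sketch of the split (pca_df, benign and malignant are my names; in this data set the target value 0 means malignant and 1 means benign):

    # Put the principal components in a DataFrame and attach the labels
    pca_df = pd.DataFrame(pca_data, columns=['PC 1', 'PC 2', 'PC 3'])
    pca_df['target'] = data.target

    # In load_breast_cancer, target 0 is malignant and 1 is benign
    malignant = pca_df[pca_df['target'] == 0]
    benign = pca_df[pca_df['target'] == 1]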

We can now create the visualizations. We will use the Scatter3d function (within go) for each data set to create scatter data. We will also create a bit of styling and separate the two data sets by color and name. This way we can distinguish between data points in the visualization.

Once the scatter data and layout have been created, we can plot using the iplot function.
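Putting it together might look like this; the colors and marker styling are just one way to separate the two sets:

    # One trace per label, separated by color and name
    benign_trace = go.Scatter3d(
        x=benign['PC 1'], y=benign['PC 2'], z=benign['PC 3'],
        mode='markers', name='Benign',
        marker=dict(size=4, color='#1f77b4', opacity=0.8),
    )
    malignant_trace = go.Scatter3d(
        x=malignant['PC 1'], y=malignant['PC 2'], z=malignant['PC 3'],
        mode='markers', name='Malignant',
        marker=dict(size=4, color='#d62728', opacity=0.8),
    )

    # A layout without margins gives the plot more room
    layout = go.Layout(margin=dict(l=0, r=0, b=0, t=0))

    iplot(go.Figure(data=[benign_trace, malignant_trace], layout=layout))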

As with the heat map, this plot is also interactive. This allows you to view the data from different angles to understand how your data fits within 3 dimensions. Pretty neat I must say!

[Interactive 3D scatter plot of the three principal components, colored by label]

This data set lends itself well to this approach. We can clearly see that the two different labels cluster together well – with some outliers. What’s fascinating is that we have been able to reduce the data set from a dimensionality impossible for us to imagine into an interactive 3D scatter plot. By doing this we can gain a better understanding of our data.

Be aware that our principal component analysis does come at a significant cost, as we are not able to retain all the variance in the data set. This always depends on your data set, and you will see different results with different data sets. But an exciting thought is whether we could derive (or support) conclusions about future data points by using this approach. In that case, we could also reduce one more dimension (to 2D) so that we can leverage common libraries for supervised classification.

You can read more about the data set here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)

-Simon Jaeger