This lesson summarizes the topics we'll be covering in section 35 and why they'll be important to you as a data scientist.
You will be able to:
- Understand and explain what is covered in this section
- Understand and explain why the section will help you to become a data scientist
In this section we will be moving on to a second unsupervised learning technique: Clustering! Clustering techniques are very powerful when you want to group data with similar characteristics together but have no pre-specified labels. The main goal of clustering is to create clusters with high similarity between the data points within a cluster, while keeping the similarity between clusters low.
We start by providing a basic intuition for the K-Means Clustering algorithm. When using the K-Means clustering algorithm, you specify the number of clusters, K, upfront, and the algorithm searches for the "optimal" cluster centers given that there are K clusters.
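The K-Means loop described above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation (with a simple farthest-point initialization chosen here for determinism); in practice you would typically use `sklearn.cluster.KMeans`:

```python
import numpy as np

def k_means(X, k, n_iter=100):
    # Illustrative initialization: first point, then repeatedly the
    # point farthest from all centers chosen so far.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)

    for _ in range(n_iter):
        # Step 1: assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: move each center to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # converged: assignments no longer change
        centers = new_centers
    return labels, centers

# Two well-separated blobs should be recovered as two clusters.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 5.1], [4.9, 5.3]])
labels, centers = k_means(X, k=2)
```

Note that K-Means is sensitive to initialization in general, which is why production implementations run multiple random restarts.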
A second branch of clustering algorithms is hierarchical agglomerative clustering. Using hierarchical clustering, unlike K-Means clustering, you don't decide on the number of clusters beforehand. Instead, you start with every observation in its own cluster and repeatedly merge the two most similar clusters until all observations belong to a single cluster, producing a hierarchy you can cut at any level.
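The agglomerative idea can be sketched as follows. This is a toy single-linkage implementation (merging the pair of clusters whose closest members are nearest) written for readability, not speed; in practice you would use `scipy.cluster.hierarchy` or `sklearn.cluster.AgglomerativeClustering`:

```python
import numpy as np

def agglomerate(X, n_clusters):
    # Start with every point in its own cluster.
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest single-linkage
        # distance (distance between their closest members).
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))  # merge the closest pair
    return clusters

# Two tight pairs and one outlier, cut at three clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 5.2], [9.0, 0.0]])
clusters3 = agglomerate(X, n_clusters=3)
```

Recording the merge order and distances is what yields the dendrogram you'll see later in this section.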
A very common and useful application of clustering is market segmentation. You'll practice your clustering skills on a market segmentation data set!
At the end of this section, you'll learn how semi-supervised learning techniques, which are increasingly popular in machine learning, combine concepts from both supervised and unsupervised learning.
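To preview the semi-supervised idea, here is a toy sketch: only a few points carry labels, and each unlabeled point gradually inherits the label of its nearest labeled neighbor. This is a hypothetical illustration of the concept, not one of the specific algorithms covered later:

```python
import numpy as np

def propagate_labels(X, labels):
    # labels: -1 marks an unlabeled point.
    labels = labels.copy()
    while (labels == -1).any():
        labeled = np.where(labels != -1)[0]
        unlabeled = np.where(labels == -1)[0]
        # Distance from every unlabeled point to every labeled point.
        d = np.linalg.norm(X[unlabeled, None] - X[labeled], axis=2)
        # Label the single unlabeled point closest to any labeled one,
        # then repeat so labels spread outward step by step.
        u, l = np.unravel_index(d.argmin(), d.shape)
        labels[unlabeled[u]] = labels[labeled[l]]
    return labels

# Six 1-D points, only the two extremes labeled.
X = np.array([[0.0], [0.5], [1.0], [9.0], [9.5], [10.0]])
labels = np.array([0, -1, -1, -1, -1, 1])
result = propagate_labels(X, labels)
```

The labeled examples act like cluster "seeds": the unsupervised structure of the data does most of the work, which is exactly why semi-supervised methods can get by with very few labels.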
In this section, you'll learn how to use clustering techniques, which are very useful for finding patterns and grouping unlabeled data together.