Cluster analysis is a staple of unsupervised machine learning and data science.
It is especially useful for data mining and big data because, unlike supervised machine learning, it automatically finds patterns in the data without the need for labels.
In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.
Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?
We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.
If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!
Those “Y”s have to come from somewhere, and a lot of the time that involves manual labor.
Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.
But you still want to have some idea of the structure of the data. If you’re doing data analytics, automating pattern recognition in your data would be invaluable.
This is where unsupervised machine learning comes into play.
In this course we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.
There are 2 methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.
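To give you a taste of the first of these, here is a minimal Numpy sketch of the k-means idea: alternate between assigning each point to its nearest cluster center and moving each center to the mean of its assigned points. The function, variable names, and toy data below are my own illustration, not code from the course.

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    # Randomly pick k data points as the initial cluster centers.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its points
        # (keep the old center if a cluster happens to be empty).
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy data: two well-separated blobs of 2-D points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, k=2)
```

On data like this, the two blobs end up in separate clusters; the course covers what happens when the data is less cooperative.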
Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to “learn” the probability distribution of a set of data.
One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! We’ll prove that this is the case.
All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.
All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.
This course focuses on “how to build and understand”, not just “how to use”. Anyone can learn to use an API in 15 minutes after reading some documentation. It’s not about “remembering facts”, it’s about “seeing for yourself” via experimentation. It will teach you how to visualize what’s happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.
“If you can’t implement it, you don’t understand it”
Or as the great physicist Richard Feynman said: “What I cannot create, I do not understand”.
My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch.
Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code?
After doing the same thing with 10 datasets, you realize you didn’t learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times…
Suggested Prerequisites:
- matrix addition, multiplication
- probability
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:
- Check out the lecture “Machine Learning and AI Prerequisite Roadmap” (available in the FAQ of any of my courses, including the free Numpy course)
1. Introduction
2. Course Outline
3. What is unsupervised learning used for?
This lecture describes what unsupervised machine learning (not just clustering) is used for in general.
There are 2 major categories:
1) density estimation
If we can figure out the probability distribution of the data, not only is this a model of the data, but we can then sample from the distribution to generate new data.
For example, we can train a model to read lots of Shakespeare and then generate writing in the style of Shakespeare.
2) latent variables
This allows us to find the underlying cause of the data we've observed by reducing it to a small set of factors.
For example, if we measure the heights of all the people in our class and plot them on a histogram, we may notice 2 "bumps".
These "bumps" correspond to male heights and female heights.
Thus, being male or female is the hidden cause of higher / lower height values.
Clustering does exactly this - it tells us how the data can be split up into distinct groups / segments / categories.
Unsupervised machine learning can also be used for:
dimensionality reduction - modern datasets can have millions of features, but many of them may be correlated
visualization - you can't see a million-dimensional dataset, but if you reduce the dimensionality to 2, then it can be visualized
4. Why Use Clustering?
5. Where to get the code
6. How to Succeed in this Course
K-Means Clustering
7. An Easy Introduction to K-Means Clustering
8. Hard K-Means: Exercise Prompt 1
9. Hard K-Means: Exercise 1 Solution
10. Hard K-Means: Exercise Prompt 2
11. Hard K-Means: Exercise 2 Solution
12. Hard K-Means: Exercise Prompt 3
13. Hard K-Means: Exercise 3 Solution
14. Hard K-Means Objective: Theory
15. Hard K-Means Objective: Code
16. Soft K-Means
17. The Soft K-Means Objective Function
18. Soft K-Means in Python Code
19. How to Pace Yourself
20. Visualizing Each Step of K-Means
21. Examples of where K-Means can fail
22. Disadvantages of K-Means Clustering
23. How to Evaluate a Clustering (Purity, Davies-Bouldin Index)
24. Using K-Means on Real Data: MNIST
25. One Way to Choose K
26. K-Means Application: Finding Clusters of Related Words
27. Clustering for NLP and Computer Vision: Real-World Applications
28. Suggestion Box
Hierarchical Clustering
29. Visual Walkthrough of Agglomerative Hierarchical Clustering
30. Agglomerative Clustering Options
Learn about the different possible distance metrics that can be used for both k-means and agglomerative clustering, and what constitutes a valid distance metric. Learn about the different linkage methods for hierarchical clustering, like single linkage, complete linkage, UPGMA, and Ward linkage.
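As a small preview of what that looks like in practice, Scipy's hierarchy module exposes these linkage methods directly. The toy blobs below are my own example, not the course's dataset.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two well-separated 2-D blobs (my own example).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])

# Ward linkage merges the pair of clusters that least increases the
# total within-cluster variance; 'single', 'complete', and 'average'
# (UPGMA) are among the other linkage options.
Z = linkage(X, method='ward')

# Cut the merge tree into 2 flat clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
```

The same linkage matrix `Z` can be passed to `scipy.cluster.hierarchy.dendrogram` to draw the tree, which is exactly what the "Interpreting the Dendrogram" lecture is about.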
31. Using Hierarchical Clustering in Python and Interpreting the Dendrogram
32. Application: Evolution
33. Application: Donald Trump vs. Hillary Clinton Tweets
Gaussian Mixture Models (GMMs)
34. Gaussian Mixture Model (GMM) Algorithm
35. Write a Gaussian Mixture Model in Python Code
36. Practical Issues with GMM / Singular Covariance
37. Comparison between GMM and K-Means
38. Kernel Density Estimation
39. GMM vs Bayes Classifier (pt 1)
40. GMM vs Bayes Classifier (pt 2)
41. Expectation-Maximization (pt 1)
42. Expectation-Maximization (pt 2)
43. Expectation-Maximization (pt 3)
44. Future Unsupervised Learning Algorithms You Will Learn
Setting Up Your Environment (FAQ by Student Request)
Extra Help With Python Coding for Beginners (FAQ by Student Request)
Effective Learning Strategies for Machine Learning (FAQ by Student Request)
How long do I have access to the course materials?
You can view and review the lecture materials indefinitely, like an on-demand channel.
Can I take my courses with me wherever I go?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don't have an internet connection, some instructors also let their students download course lectures. That's up to the instructor though, so make sure you get on their good side!
Ratings: 5 stars: 2887 · 4 stars: 1739 · 3 stars: 305 · 2 stars: 61 · 1 star: 52