Hierarchical Clustering
Hierarchical Clustering is a clustering method in machine learning and statistics that builds a hierarchy of clusters by either merging smaller clusters into larger ones (agglomerative) or dividing larger clusters into smaller ones (divisive). It is widely used for exploratory data analysis and in domains such as bioinformatics, marketing, and social network analysis.
Types of Hierarchical Clustering
Hierarchical clustering is divided into two main types:
- Agglomerative (Bottom-Up): starts with each data point as its own cluster, then iteratively merges the two closest clusters until all points are in a single cluster or a stopping criterion is met (see the sketch after this list).
- Divisive (Top-Down): starts with all data points in one cluster, then iteratively splits clusters into smaller ones until each point is its own cluster or a stopping criterion is met.
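For the agglomerative variant, a minimal sketch using scikit-learn's AgglomerativeClustering (an assumption: scikit-learn is installed; divisive clustering has no ready-made implementation in SciPy or scikit-learn and is typically built by recursively splitting clusters):

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two visually separated groups of points
X = np.array([[1, 2], [2, 3], [3, 4], [10, 10], [11, 11], [12, 12]])

# Bottom-up: each point starts as its own cluster; merging stops
# once only n_clusters clusters remain
model = AgglomerativeClustering(n_clusters=2, linkage='ward')
labels = model.fit_predict(X)
print(labels)  # e.g., [1 1 1 0 0 0] -- label values may vary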
Steps in Hierarchical Clustering
- Compute a distance matrix that quantifies the dissimilarity between each pair of data points (e.g., using Euclidean distance); see the sketch after this list.
- Choose a linkage method that defines the distance between clusters.
- Perform the clustering: for agglomerative clustering, repeatedly merge the two closest clusters; for divisive clustering, repeatedly split a cluster according to a criterion.
- Visualize the resulting hierarchy as a dendrogram.
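A minimal sketch of the first two steps, assuming SciPy is available: pdist computes the pairwise distances from step one, and linkage then consumes them directly.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

X = np.array([[1, 2], [2, 3], [3, 4], [10, 10], [11, 11], [12, 12]])

# Step 1: condensed distance matrix (one Euclidean distance per pair)
D = pdist(X, metric='euclidean')
print(squareform(D).shape)  # (6, 6) when expanded to a square matrix

# Step 2: a linkage method defines the distance between clusters
Z = linkage(D, method='average')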
Example
A simple example of agglomerative hierarchical clustering in Python, using SciPy:

from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt
import numpy as np

# Example dataset: two well-separated groups of three points each
data = np.array([[1, 2], [2, 3], [3, 4], [10, 10], [11, 11], [12, 12]])

# Perform agglomerative clustering with Ward linkage
Z = linkage(data, method='ward')

# Plot the dendrogram of the merge hierarchy
plt.figure(figsize=(8, 4))
dendrogram(Z)
plt.title("Dendrogram")
plt.xlabel("Data Points")
plt.ylabel("Distance")
plt.show()
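Once the dendrogram is plotted, a common follow-up is to cut it into flat clusters. A minimal sketch using SciPy's fcluster on the Z matrix from the example above, with the cut height chosen by eye from the dendrogram:

from scipy.cluster.hierarchy import fcluster

# Undo every merge above height 5; points remaining in the same
# subtree share a cluster label
labels = fcluster(Z, t=5, criterion='distance')
print(labels)  # e.g., [1 1 1 2 2 2]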
Linkage Methods
The choice of linkage method determines how the distance between two clusters is computed (a comparison sketch follows this list):
- Single Linkage: uses the minimum distance between any two points in different clusters.
- Complete Linkage: uses the maximum distance between any two points in different clusters.
- Average Linkage: uses the average distance over all pairs of points drawn from the two clusters.
- Ward's Method: merges the pair of clusters whose union yields the smallest increase in total within-cluster variance.
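One rough way to compare linkage methods on a given dataset is the cophenetic correlation coefficient, which measures how faithfully each dendrogram preserves the original pairwise distances (closer to 1 is better). A minimal sketch, reusing the data array from the example above:

from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist
import numpy as np

data = np.array([[1, 2], [2, 3], [3, 4], [10, 10], [11, 11], [12, 12]])
condensed = pdist(data)  # condensed pairwise Euclidean distances

for method in ['single', 'complete', 'average', 'ward']:
    Z = linkage(condensed, method=method)
    # Correlation between cophenetic distances and original distances
    c, _ = cophenet(Z, condensed)
    print(f"{method:>8}: cophenetic correlation = {c:.3f}")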
Advantages
- Does not require the number of clusters to be specified beforehand.
- Produces a dendrogram that provides insight into the data's structure and relationships.
- Can handle non-spherical clusters better than K-Means, especially with single linkage (see the sketch after this list).
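The non-spherical advantage can be seen on scikit-learn's two-moons toy data (an assumption: scikit-learn is installed). K-Means tends to split each moon, while single-linkage hierarchical clustering follows their shape:

from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, AgglomerativeClustering

# Two interleaving half-moons: clearly non-spherical clusters
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# K-Means prefers roughly spherical clusters and usually cuts
# straight across both moons
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Single linkage chains along each moon and separates them
hc_labels = AgglomerativeClustering(n_clusters=2, linkage='single').fit_predict(X)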
Limitations
- Computationally expensive for large datasets: the pairwise distance matrix alone requires O(n²) memory, and standard agglomerative algorithms take O(n²) to O(n³) time.
- Sensitive to noise and outliers, which can distort early merges and propagate through the rest of the hierarchy.
- Difficult to scale to very large datasets, as a consequence of the quadratic memory and time costs.
Applications
Hierarchical clustering is used in various domains:
- Bioinformatics: Grouping genes or proteins based on similarity.
- Marketing: Segmenting customers into distinct groups for targeted strategies.
- Document Clustering: Organizing documents based on textual similarity.
- Social Network Analysis: Understanding community structures within networks.