How to Segment and Monitor Skin Anomalies Over Time Using Computer Vision

I'll explore techniques for segmenting, mapping, and tracking changes in skin anomalies (e.g., moles, spots, vessels) using dermatological images captured with everyday devices such as mobile phones. I'll cover classical computer vision methods (morphology, filtering) as well as state-of-the-art deep learning techniques, including CNNs, U-Nets, and transformers. I'll include supervised, unsupervised, and non-learning methods, with practical Python implementations using OpenCV and PyTorch, plus research papers and open-source tools suited to long-term tracking of skin changes, including under low-contrast conditions.


Classical Non-Learning Methods (Thresholding, Edges, and Morphology)

These algorithmic approaches process image data without the need for training. A basic example is thresholding pixel brightness or color. Otsu’s method provides unsupervised segmentation based on image histograms (PubMed). Thresholding typically isolates darker lesions from lighter surrounding skin. Under poor contrast, adaptive thresholds and histogram equalization improve performance.

Morphological operations (dilation, erosion, opening/closing) help eliminate noise and artifacts, smooth edges, and refine segmentation masks. Applied after thresholding, they substantially improve lesion masks (PubMed).

Hairs, which appear as thin dark lines, can be removed with edge-detection filters or clustering-based techniques. For example, hair inpainting based on max filters and DBSCAN clustering is demonstrated in this Stack Overflow example.

Other classical methods include edge detection (Canny, Sobel) and active contours (snakes, level-sets). Active contours iteratively adjust curves to match lesion boundaries and are especially effective when initialized well (Elsevier).
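A minimal active-contour sketch with scikit-image, on a synthetic blurred disk so the snake has a clean edge to lock onto (the parameter values follow the library's documented example and would need tuning for real photos):

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image: bright lesion (radius 40) on darker skin, centered at (100, 100).
img = np.zeros((200, 200))
rr, cc = np.ogrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = 1.0
img = gaussian(img, sigma=3)

# Initialize the snake as a circle slightly larger than the lesion;
# the contour then shrinks until it settles on the lesion boundary.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(theta), 100 + 60 * np.cos(theta)])
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)

# Distance of each converged contour point from the center.
radii = np.hypot(snake[:, 0] - 100, snake[:, 1] - 100)
```

This illustrates why good initialization matters: the snake only finds edges it can reach from its starting curve.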

More advanced techniques combine methods, such as watershed transforms with morphological filtering and fuzzy clustering. Rout et al. (2021) used a multiscale morphological filter, watershed, and Fuzzy C-Means (FCM) to separate lesions from healthy skin (MDPI).

🔧 Open-Source Tools:

  • OpenCV: thresholding, edge detection, morphology, watershed, inpainting.

  • scikit-image: threshold_otsu, active contours, watershed.

  • scikit-learn: clustering methods like DBSCAN.

Unsupervised Segmentation Methods (Clustering, Generative Models)

Unsupervised methods detect patterns without labeled data. One example is color clustering with k-means or Fuzzy C-Means to assign pixels to lesion vs. background groups. This works well when the lesion's color contrasts with the surrounding skin (MDPI).

A more advanced option is Generative Adversarial Networks (GANs). In 2023, Innani et al. introduced Efficient-GAN (EGAN), which segments lesions without labeled masks (PubMed). EGAN uses adversarial and morphological losses, achieving a Dice score of ~90% and a Jaccard index of ~83.6% on the ISIC dataset. Mobile-GAN, a lightweight version, is suitable for smartphones.

🛠 Open-Source Tools:

  • scikit-learn / scikit-image: clustering and fuzzy segmentation.

  • PyTorch: build custom GANs.

  • SAAL (Self-supervised Active Learning): a GitHub project offering self-supervised training and active learning.

Deep Convolutional Networks (CNNs)

Supervised CNNs can learn detailed segmentation from labeled datasets. Early CNNs were used for lesion classification; modern ones support semantic segmentation.

🔍 Key Architectures:

  • Mask R-CNN: combines detection and segmentation; used in melanoma diagnosis (PMC).

  • U-Net: a fully convolutional encoder-decoder network, originally from biomedical imaging (MDPI).

Monica et al. (2023) showed Mask R-CNN effectively highlights lesion contours (PMC).

📚 Open-Source Resources:

  • Detectron2, MMDetection: Mask R-CNN frameworks.

  • segmentation_models.pytorch: U-Net, FPN, DeepLab, PSPNet.

  • skin-lesion-segmentation projects: GitHub repos with ISIC challenge models.

  • Unet-Pytorch: Example training code.

  • LB-UNet: Lightweight model for low-resource settings.

U-Net and Variants

U-Net remains a top choice for lesion segmentation, especially with enhancements:

  • ResUNet: adds ResNet encoders (MDPI).

  • Attention U-Net, Dual U-Net: improve precision under low contrast.

Trained U-Nets adapt well even to smartphone macro images. Color normalization, hair removal, and filtering are helpful pre-processing steps.
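To make the encoder-decoder-with-skips structure concrete, here is a deliberately tiny two-level U-Net in plain PyTorch, a sketch of the architecture rather than a production model (real variants use more levels and pretrained encoders):

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = block(3, 16)
        self.pool = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                 # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)          # 1-channel lesion logit map

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)

net = TinyUNet()
logits = net(torch.rand(1, 3, 64, 64))           # per-pixel lesion logits
```

The skip connection (`torch.cat`) is what preserves fine boundary detail that pooling would otherwise destroy; this matters for low-contrast lesion edges.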

🛠 Tools:

  • segmentation_models.pytorch: U-Net and ResUNet-style variants with interchangeable pretrained encoders (e.g., ResNet).

Transformer-Based Approaches

Transformers (e.g., ViT) model global context well but tend to lack fine-grained edge localization, so hybrid models (CNN + Transformer) often perform best.

Notable Models:

  • TC-Net: Combines ViT and CNN features. Improved Dice by ~2.5% over Swin-UNet (PLOS One).

  • TAFM-Net (2024): Combines EfficientNetV2 + Transformer (arXiv).

Other architectures: TransUNet, DuaSkinSeg, SPA-Former – often available on GitHub.
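None of the snippets below reproduce these specific models; the following is a generic sketch of the hybrid pattern they share, built only from standard PyTorch modules: a small CNN extracts local features, a transformer encoder mixes them globally, and a head upsamples back to a segmentation map.

```python
import torch
import torch.nn as nn

class CNNTransformerSeg(nn.Module):
    """Generic hybrid: CNN for local features, transformer for global context."""
    def __init__(self, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # 1/4-resolution features
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, x):
        f = self.cnn(x)                                # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)          # (B, H*W, C) tokens
        tokens = self.transformer(tokens)              # global self-attention
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        logits = self.head(f)                          # coarse lesion logits
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

out = CNNTransformerSeg()(torch.rand(1, 3, 64, 64))
```

Published models differ mainly in where attention is injected and how the decoder recovers edges, but the flatten-to-tokens step shown here is common to all of them.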

🛠 Tools:

  • timm PyTorch library: Vision Transformers.

  • HuggingFace Transformers: ViT support.

  • Custom U-Net + ViT hybrids using PyTorch.

Monitoring Lesion Changes Over Time

To track lesion changes across images:

  1. Image Registration: Use OpenCV’s findHomography or estimateAffinePartial2D to align images.

  2. Change Detection: XOR masks, compare Dice coefficients, measure area or longest diameter changes.

3D mapping (e.g., Revisiting Lesion Tracking in 3D) offers advanced body tracking but is less practical for home use.

Mobile Tools:

  • MoleMapper: maps moles by body zone, tracks over time.

  • Mel GitHub: catalog, rotate, and compare mole photos.

For change detection, you can also train a CNN that compares image pairs (similar to satellite change detection).
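A minimal Siamese-style sketch of that idea in PyTorch: a shared encoder embeds both photos, and the head classifies the (absolute) feature difference. The architecture and the "changed / unchanged" framing are illustrative assumptions; it would need training on labeled before/after pairs.

```python
import torch
import torch.nn as nn

class ChangeNet(nn.Module):
    """Siamese pair comparison: shared encoder, classify feature difference."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)     # logit: "has the lesion changed?"

    def forward(self, img_t0, img_t1):
        diff = self.encoder(img_t0) - self.encoder(img_t1)
        return self.head(torch.abs(diff))

net = ChangeNet()
logit = net(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```

Sharing the encoder weights between the two inputs is the key design choice: it forces the network to compare images in a common feature space rather than memorizing each visit separately.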

Summary

Combining classical, deep, and unsupervised segmentation with tracking enables comprehensive skin analysis from mobile photos. Open-source tools like OpenCV, PyTorch, and U-Net implementations allow for robust lesion mapping and long-term monitoring. Whether for research or personal use, these methods empower early detection and dermatological insights.
