By Rukshan Pramoditha: t-SNE Visualization with Yellowbrick — A Fast and Easy Method. Jun 7, 2023
In Towards Data Science by Rukshan Pramoditha: How t-SNE Outperforms PCA in Dimensionality Reduction (PCA vs t-SNE for visualizing high-dimensional data in a lower-dimensional space). May 23, 2023
In Towards Data Science by Rukshan Pramoditha: Non-Negative Matrix Factorization (NMF) for Dimensionality Reduction in Image Data (Discussing theory and implementation with Python and Scikit-learn). May 6, 2023
In Towards Data Science by Rukshan Pramoditha: 11 Dimensionality Reduction Techniques You Should Know in 2021 (Reduce the size of your dataset while keeping as much of the variation as possible). Apr 14, 2021
In Towards Data Science by Rukshan Pramoditha: Singular Value Decomposition vs Eigendecomposition for Dimensionality Reduction (Performing PCA using both methods and comparing the results). Mar 20, 2023
In Data Science 365 by Rukshan Pramoditha: 2 Plots That Help Me to Choose the Right Number of Principal Components (Creating the cumulative explained variance plot and the scree plot in PCA). Mar 12, 2023
In Data Science 365 by Rukshan Pramoditha: An In-depth Guide to PCA with NumPy (Through eigenvalue analysis). Feb 27, 2023
In Data Science 365 by Rukshan Pramoditha: Do We Need Feature Scaling Before Linear Discriminant Analysis (LDA)? (LDA for dimensionality reduction with and without feature scaling). Jan 20, 2023
In Towards Data Science by Rukshan Pramoditha: 11 Different Uses of Dimensionality Reduction (The whole of ML is full of dimensionality reduction and its applications — let's see them in action!). Dec 8, 2021
In Towards Data Science by Rukshan Pramoditha: PCA vs Autoencoders for a Small Dataset in Dimensionality Reduction (Neural Networks and Deep Learning Course: Part 45). Feb 16, 2023
In Towards Data Science by Rukshan Pramoditha: Dimensionality Reduction for Linearly Inseparable Data (Non-linear dimensionality reduction using kernel PCA). Dec 20, 2022
In Data Science 365 by Rukshan Pramoditha: 3 Easy Steps to Perform Dimensionality Reduction Using Principal Component Analysis (PCA) (Running the PCA algorithm twice is the most effective way of performing PCA). Jan 3, 2023
In Towards Data Science by Rukshan Pramoditha: LDA Is More Effective than PCA for Dimensionality Reduction in Classification Datasets (Linear discriminant analysis for dimensionality reduction while maximizing class separability). Dec 29, 2022
In Towards Data Science by Rukshan Pramoditha: How to Select the Best Number of Principal Components for the Dataset (Six methods you should follow). Apr 24, 2022
In Towards Data Science by Rukshan Pramoditha: How Autoencoders Outperform PCA in Dimensionality Reduction (Dimensionality reduction with autoencoders using non-linear data). Aug 19, 2022
In Towards Data Science by Rukshan Pramoditha: Using PCA to Reduce the Number of Parameters in a Neural Network by 30x (While still getting even better performance! Neural Networks and Deep Learning Course: Part 17). Jun 12, 2022
In Towards Data Science by Rukshan Pramoditha: RGB Color Image Compression Using Principal Component Analysis (PCA in action for dimensionality reduction). Mar 29, 2022
In Data Science 365 by Rukshan Pramoditha: Principal Component Analysis — 18 Questions Answered (A one-stop place for most of your questions regarding PCA). Mar 18, 2022
In Data Science 365 by Rukshan Pramoditha: Eigendecomposition of a Covariance Matrix with NumPy (For Principal Component Analysis (PCA)). Mar 13, 2022
In Towards Data Science by Rukshan Pramoditha: How to Mitigate Overfitting with Dimensionality Reduction (Addressing the Problem of Overfitting — Part 3). Sep 27, 2021