The Intuition behind the Universal Approximation Theorem for Neural Networks
Can neural networks approximate any non-linear function?
Rukshan Pramoditha · Dec 21, 2023

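As a quick illustration of the theorem's claim, a single-hidden-layer MLP can be fit to a non-linear target such as sin(x). This is a minimal sketch, not code from the article; the layer width and epoch count are arbitrary choices.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Target: a non-linear function the network should approximate.
x = np.linspace(-np.pi, np.pi, 1000).reshape(-1, 1)
y = np.sin(x)

# One hidden layer suffices in principle, per the theorem.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, verbose=0)  # loss should approach zero
```
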
Multi-Fidelity Optimization for Hyperparameter Tuning in Multi-Layer Perceptrons (MLPs)
Utilizing Coarse-to-Fine Search (CFS) for an informed search for the best neural network hyperparameters
Rukshan Pramoditha · Aug 5, 2023

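The coarse-to-fine idea can be sketched in a few lines: sample a hyperparameter (here, the learning rate) log-uniformly over a wide range, then re-sample in a narrow band around the coarse winner. The ranges and the lr_best value below are hypothetical placeholders, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coarse stage: 20 learning rates sampled log-uniformly in [1e-5, 1e-1].
coarse_lrs = 10 ** rng.uniform(-5, -1, size=20)
# ... train and evaluate a model for each candidate, keep the best ...

lr_best = 3e-3  # hypothetical winner of the coarse stage

# Fine stage: re-sample within half a decade around the coarse winner.
fine_lrs = 10 ** rng.uniform(np.log10(lr_best) - 0.5,
                             np.log10(lr_best) + 0.5, size=20)
```
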
Autoencoders vs t-SNE for Dimensionality Reduction
While preserving spatial relationships between data points in both lower and higher dimensions
Rukshan Pramoditha · Jun 30, 2023

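For reference, the t-SNE side of the comparison takes only a few lines with scikit-learn; the digits dataset here is just an example choice, not necessarily the article's.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Project the 64-dimensional digits features down to 2-D.
X, y = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)
```
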
Generating Flower Images Using Deep Convolutional GANs
Implementation of CNN-based GANs (DCGANs) with Keras to generate natural-looking, realistic flower images
Rukshan Pramoditha · Apr 13, 2023

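A rough sketch of the generator half of a DCGAN (the article's exact architecture may differ): project a noise vector to a small feature map, then upsample with transposed convolutions to an RGB image.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Noise vector -> 8x8 feature map -> upsampled to a 64x64 RGB image.
generator = keras.Sequential([
    keras.Input(shape=(100,)),
    layers.Dense(8 * 8 * 256),
    layers.Reshape((8, 8, 256)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
])
```
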
Using Inbuilt Datasets with TensorFlow Datasets (TFDS)
Take advantage of ready-to-use datasets with TensorFlow for your ML and DL tasks
In Data Science 365 by Rukshan Pramoditha · Mar 26, 2023

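A minimal TFDS usage sketch, assuming MNIST as the example dataset:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load the MNIST training split as (image, label) pairs.
ds_train = tfds.load("mnist", split="train", as_supervised=True)

# Standard input pipeline: normalize, shuffle, batch, prefetch.
ds_train = (
    ds_train
    .map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label))
    .shuffle(10_000)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)
)
```
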
PCA vs Autoencoders for a Small Dataset in Dimensionality Reduction
Neural Networks and Deep Learning Course: Part 45
In Towards Data Science by Rukshan Pramoditha · Feb 16, 2023

Convolutional vs Feedforward Autoencoders for Image Denoising
Cleaning corrupted images using convolutional and feedforward autoencoders
In Towards Data Science by Rukshan Pramoditha · Jan 24, 2023

Altering the Sampling Distribution of the Training Dataset to Improve Neural Network’s Accuracy
3 Methods you should know in 2023
In Data Science 365 by Rukshan Pramoditha · Dec 7, 2022

A Short Introduction to GANs in Generative Deep Learning
The battle between two adversaries
In Data Science 365 by Rukshan Pramoditha · Dec 3, 2022

Deep Learning Hardware Selection Guide for 2023
To run deep learning models dramatically faster
In Data Science 365 by Rukshan Pramoditha · Oct 31, 2022

Learning Rate Schedules and Decay in Keras Optimizers
Options for changing the learning rate during training
In Data Science 365 by Rukshan Pramoditha · Oct 5, 2022

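One such option in Keras is a decay schedule passed directly to the optimizer. A minimal sketch with exponential decay (the constants are arbitrary):

```python
import tensorflow as tf

# Decay the learning rate by a factor of 0.9 every 1,000 steps:
# lr(step) = 0.01 * 0.9 ** (step / 1000)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=1000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```
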
Plotting the Learning Curve to Analyze the Training Performance of a Neural Network
To detect overfitting, underfitting, slow convergence, oscillation, and oscillation with divergence
In Data Science 365 by Rukshan Pramoditha · Sep 29, 2022

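Plotting the curve itself is a few lines, assuming a compiled Keras model and training arrays (model, x_train, y_train) are already in scope:

```python
import matplotlib.pyplot as plt

# model.fit() returns a History object with per-epoch metrics.
history = model.fit(x_train, y_train, validation_split=0.2, epochs=50)

# A widening train/validation gap suggests overfitting;
# two flat, high curves suggest underfitting.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```
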
Classification of Neural Network Hyperparameters
By network structure, learning and optimization, and regularization effect
In Towards Data Science by Rukshan Pramoditha · Sep 16, 2022

How the Dimension of the Autoencoder Latent Vector Affects the Quality of Latent Representation
Hyperparameter tuning in autoencoders — Part 2
In Data Science 365 by Rukshan Pramoditha · Sep 8, 2022

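A minimal sketch of the setup being tuned, with the latent dimension exposed as a single variable; the layer sizes are illustrative, not the article's.

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 32  # the hyperparameter under study

encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),  # latent vector
])
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
```
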
How to Choose the Optimal Learning Rate for Neural Networks
Guidelines for tuning the most important neural network hyperparameter with examples
In Towards Data Science by Rukshan Pramoditha · Sep 21, 2022

Determining the Right Batch Size for a Neural Network to Get Better and Faster Results
Guidelines for choosing the right batch size to maintain optimal training speed and accuracy while saving computer resources
In Data Science 365 by Rukshan Pramoditha · Sep 26, 2022

Batch Normalization Explained in Plain English
Theory and implementation in Keras
In Data Science 365 by Rukshan Pramoditha · Aug 30, 2022

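A minimal Keras sketch, with BatchNormalization inserted between the linear transformation and the activation (one common placement; the article may use another):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(256, use_bias=False),  # bias is redundant before BN
    layers.BatchNormalization(),
    layers.Activation("relu"),
    layers.Dense(10, activation="softmax"),
])
```
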
All You Need to Know about Batch Size, Epochs and Training Steps in a Neural Network
And the connection between them explained in plain English with examples
In Data Science 365 by Rukshan Pramoditha · Aug 26, 2022

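The connection reduces to one line of arithmetic; a sketch with hypothetical numbers:

```python
import math

num_samples = 60_000   # e.g. the MNIST training set
batch_size = 128

# One training step processes one batch; one epoch is a full pass
# over the data, so steps_per_epoch = ceil(num_samples / batch_size).
steps_per_epoch = math.ceil(num_samples / batch_size)  # 469

epochs = 10
total_training_steps = steps_per_epoch * epochs  # 4,690
```
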
How the Number of Hidden Layers Affects the Quality of Autoencoder Latent Representation
Hyperparameter tuning in autoencoders — Part 1
In Towards Data Science by Rukshan Pramoditha · Aug 23, 2022

How Covariate Shift Happens in Neural Networks
And eliminating it with batch normalization
In Data Science 365 by Rukshan Pramoditha · Sep 3, 2022