Hello! Thank you for your question. It is true that we lose some of the variance in the original data when we reduce its dimensionality. The goal is to keep as much of that variance as possible while reducing the number of dimensions, which means identifying the principal components that hold most of the variance. You can plot the percentage of total variance captured by each principal component and decide which components to keep. Read Questions 14 and 15 in my article (https://rukshanpramoditha.medium.com/principal-component-analysis-18-questions-answered-4abd72041ccd) to learn more about this.

PCA removes multicollinearity between input features, removes noise in the data, and reduces the complexity of the model. However, if you discard a key component that captures a significant amount of variance, the model's accuracy will drop. That is why it is important to identify the most critical principal components using the plot mentioned above and build the model with them.

Please kindly note that the model's accuracy depends on other factors too: the random state used when splitting the train and test sets, the amount of data, class imbalance, the number of trees in the model (if the model is a random forest or a similar variant), the number of iterations set during optimization, outliers and missing values in the data, and so on.
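To make this concrete, here is a minimal sketch of how you might inspect the variance captured by each component using scikit-learn's `PCA` and its `explained_variance_ratio_` attribute. The iris dataset and the 95% threshold are illustrative choices, not part of the original discussion:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data  # illustrative dataset: 150 samples, 4 features

# Fit PCA with all components so we can see the full variance breakdown
pca = PCA().fit(X)

# Fraction of total variance captured by each principal component
print(pca.explained_variance_ratio_)

# Cumulative variance: plot this (e.g. with matplotlib) to get a scree plot
cum_var = np.cumsum(pca.explained_variance_ratio_)

# Smallest number of components that retains at least 95% of the variance
# (95% is an arbitrary example threshold)
n_components = int(np.searchsorted(cum_var, 0.95) + 1)
print(n_components)
```

You could then refit `PCA(n_components=n_components)` and transform the data before training the model, keeping only the components that matter.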
Hope this will be helpful for you! If you have any further questions regarding this, feel free to ask them in the comment section.