All the materials are available in the link below

Visit for data science blogs

Timestamps:

00:00:00 Introduction
00:01:25 AI vs ML vs DL vs Data Science
00:07:56 Machine Learning and Deep Learning
00:09:05 Regression And Classification
00:18:14 Linear Regression Algorithm
01:07:14 Ridge And Lasso Regression Algorithms
01:33:08 Logistic Regression Algorithm
02:13:52 Linear Regression Practical Implementation
02:28:30 Ridge And Lasso Regression Practical Implementation
02:54:21 Naive Bayes Algorithms
03:16:02 KNN Algorithm Intuition
03:23:47 Decision Tree Classification Algorithms
03:57:05 Decision Tree Regression Algorithms
04:02:57 Practical Implementation Of Decision Tree Classifier
04:09:14 Ensemble Bagging And Boosting Techniques
04:21:29 Random Forest Classifier And Regressor
04:29:58 Boosting, AdaBoost Machine Learning Algorithms
04:47:30 K Means Clustering Algorithm
05:01:54 Hierarchical Clustering Algorithms
05:11:28 Silhouette Clustering - Validating Clusters
05:17:46 DBSCAN Clustering Algorithms
05:25:57 Clustering Practical Examples
05:35:51 Bias And Variance
05:43:44 XGBoost Classifier Algorithm
06:00:00 XGBoost Regressor Algorithm
06:19:04 SVM Machine Learning Algorithm
———————————————————————————————————————
►Data Science Projects:

►Learn In One Tutorials

Statistics In 6 Hours:

Machine Learning In 6 Hours:

Deep Learning In 5 Hours:

►Learn In a Week Playlist

Statistics:

Machine Learning:

Deep Learning:

NLP:

►Detailed Playlist:

Stats For Data Science In Hindi :

Machine Learning In English :

Machine Learning In Hindi :

Complete Deep Learning:


35 thoughts on “Complete Machine Learning In 6 Hours | Krish Naik”
  1. I don't know if my comment will get any attention, but I just wanted to say that every single one of Krish Naik sir's videos is so good, and I would recommend these videos to everyone who wants an internship or a job in data science, data analysis, or machine learning. Krish sir …thank you so much… your videos helped me get an internship abroad… your videos are so good… thank you so much… and these are so far the best videos in terms of everything when it comes to learning❤❤❤❤

  2. Understanding R-Squared and Adjusted R-Squared

    Scenario

    Suppose we are working on a problem where we aim to predict the price of a house. Initially, we use one feature, the number of bedrooms, and obtain an R-squared (R²) value of 85%. This means that 85% of the variation in house prices is explained by the number of bedrooms.

    Next, we add another feature—the location of the house—which is strongly correlated with house price. As expected, the R-squared value increases to 90%, indicating an improved model.

    Now, we introduce an irrelevant feature, such as the gender of the person living in the house. Gender has no logical relationship with house price, yet the R-squared value increases to 91%.

    This happens because R-squared always increases (or remains the same) when new features are added, even if those features are not actually useful. This can lead to a misleading interpretation, as a model with irrelevant variables might appear to perform better than a simpler, more meaningful model.

    The Problem with R-Squared

    R-squared increases when more features are added, even if they are not useful.

    It does not penalize for adding irrelevant variables.

    A model with a higher R-squared may not necessarily be the best model.

    Solution: Adjusted R-Squared

    To counteract this issue, we use Adjusted R-Squared, which modifies the R-squared value by penalizing unnecessary features.

    Formula for Adjusted R-Squared

    Adjusted R² = 1 – [(1 – R²)(n – 1) / (n – p – 1)]

    where:

    R² = R-squared value

    n = Number of observations (data points)

    p = Number of independent variables (features)

    Why Adjusted R-Squared?

    Prevents misleading improvement: Unlike R-squared, adjusted R-squared does not always increase when new features are added.

    Penalizes unnecessary variables: If an added feature does not improve the model significantly, adjusted R-squared will decrease instead of increasing.

    Helps in feature selection: It ensures that only meaningful features contribute to the model’s predictive power.

    Effect of Increasing Predictors (p)

    As we increase the number of predictors (p), the denominator (n – p – 1) decreases. If the newly added feature is not correlated with the target variable, the numerator (1 – R²)(n – 1) stays roughly the same. Dividing a nearly unchanged numerator by a smaller denominator makes the fraction larger, so 1 – (larger fraction) becomes smaller. This leads to a decrease in adjusted R-squared, even though R-squared itself may have increased.
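
    As a worked example with made-up numbers: with n = 20 and p = 2, an R² of 0.90 gives adjusted R² = 1 – (0.10 × 19 / 17) ≈ 0.888. Adding an irrelevant third predictor (p = 3) that nudges R² up to 0.905 gives adjusted R² = 1 – (0.095 × 19 / 16) ≈ 0.887, a decrease despite the higher R².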

    This explains why adjusted R-squared is always less than or equal to R-squared.

    Conclusion

    While R-squared is a good indicator of model performance, it can be misleading when adding unnecessary features. Adjusted R-squared provides a more reliable evaluation by penalizing irrelevant variables, ensuring the model remains both accurate and interpretable. Additionally, since Adjusted R-Squared accounts for the number of predictors, it will always be less than or equal to R-Squared—an important concept often tested in interviews.
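
    A minimal sketch of this behavior, assuming scikit-learn and a synthetic house-price dataset (the feature names and numbers here are made up for illustration): R-squared does not fall when a pure-noise feature is added, while adjusted R-squared typically does.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def adjusted_r2(r2, n, p):
        # Adjusted R² = 1 – (1 – R²)(n – 1) / (n – p – 1)
        return 1 - (1 - r2) * (n - 1) / (n - p - 1)

    rng = np.random.default_rng(42)
    n = 50
    bedrooms = rng.integers(1, 6, size=n).astype(float)
    price = 50 * bedrooms + rng.normal(scale=10, size=n)

    # Model 1: one meaningful feature
    X1 = bedrooms.reshape(-1, 1)
    r2_1 = LinearRegression().fit(X1, price).score(X1, price)

    # Model 2: the same feature plus pure noise
    X2 = np.column_stack([bedrooms, rng.normal(size=n)])
    r2_2 = LinearRegression().fit(X2, price).score(X2, price)

    print(f"R² = {r2_1:.4f}, adjusted R² = {adjusted_r2(r2_1, n, 1):.4f}")
    print(f"R² = {r2_2:.4f}, adjusted R² = {adjusted_r2(r2_2, n, 2):.4f}")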

  3. Sir, if the value of the weight is already close to nil, then squaring it would make it closer to 0 instead of increasing it. So wouldn't Ridge Regression be a better algorithm for eliminating unimportant features in this case?
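
    (A small sketch, assuming scikit-learn's Ridge and Lasso on synthetic data, illustrates the usual answer to this question: the L1 penalty in Lasso can set the weights of unimportant features exactly to zero, while the L2 penalty in Ridge only shrinks them toward zero without eliminating them, so Lasso is the one typically used for feature elimination.)

    import numpy as np
    from sklearn.linear_model import Ridge, Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    # The target depends only on the first two features; the third is irrelevant
    y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

    ridge = Ridge(alpha=1.0).fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)

    print("Ridge coefficients:", ridge.coef_)  # third weight small but non-zero
    print("Lasso coefficients:", lasso.coef_)  # third weight driven to 0.0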

  4. I am currently watching this. I am at 12:00 and I am already loving it.
    I searched a lot of channels about machine learning, but their teaching style was not helping me. Here, I really like his way of explaining everything.

  5. I appreciate the content. It provides in-depth clarity as well as a connection between each topic and the previous one, covering why it is needed and what it is. This makes the flow of the content easy to grasp and remember. Amazing!

  6. The dataset for the practical implementation of regression techniques has been removed from the source (scikit-learn datasets):

    File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\sklearn\datasets\__init__.py:161, in __getattr__(name)

        110 if name == "load_boston":
        111     msg = textwrap.dedent(
        112         """
        113         `load_boston` has been removed from scikit-learn since version 1.2.
        (…)
        159         """
        160     )
    --> 161     raise ImportError(msg)
        162 try:
        163     return globals()[name]
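
    A minimal workaround sketch, assuming the California housing dataset is an acceptable substitute for the removed Boston dataset (scikit-learn's own removal notice points to alternatives):

    # `load_boston` was removed in scikit-learn 1.2; the California housing
    # dataset is a common drop-in substitute for practicing regression.
    from sklearn.datasets import fetch_california_housing
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    housing = fetch_california_housing()   # downloads once, then cached locally
    X, y = housing.data, housing.target    # 8 numeric features, median house value target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    model = LinearRegression().fit(X_train, y_train)
    print("Test R²:", model.score(X_test, y_test))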
