
TensorFlow 2.0 Complete Course – Python Neural Networks for Beginners Tutorial


Learn how to use TensorFlow 2.0 in this full tutorial course for beginners. This course is designed for Python programmers looking to enhance their knowledge and skills in machine learning and artificial intelligence.

Throughout the 8 modules in this course, you will learn about fundamental concepts and methods in ML & AI, such as core learning algorithms, deep learning with neural networks, computer vision with convolutional neural networks, natural language processing with recurrent neural networks, and reinforcement learning.

Each of these modules includes in-depth explanations and a variety of coding examples. After completing this course, you will have a thorough knowledge of the core techniques in machine learning and AI, and the skills necessary to apply these techniques to your own datasets and unique problems.

⭐️ Google Colaboratory Notebooks ⭐️

📕 Module 2: Introduction to TensorFlow – https://colab.research.google.com/drive/1F_EWVKa8rbMXi3_fG0w7AtcscFq7Hi7B#forceEdit=true&sandboxMode=true
📗 Module 3: Core Learning Algorithms – https://colab.research.google.com/drive/15Cyy2H7nT40sGR7TBN5wBvgTd57mVKay#forceEdit=true&sandboxMode=true
📘 Module 4: Neural Networks with TensorFlow – https://colab.research.google.com/drive/1m2cg3D1x3j5vrFc-Cu0gMvc48gWyCOuG#forceEdit=true&sandboxMode=true
📙 Module 5: Deep Computer Vision – https://colab.research.google.com/drive/1ZZXnCjFEOkp_KdNcNabd14yok0BAIuwS#forceEdit=true&sandboxMode=true
📔 Module 6: Natural Language Processing with RNNs – https://colab.research.google.com/drive/1ysEKrw_LE2jMndo1snrZUh5w87LQsCxk#forceEdit=true&sandboxMode=true
📒 Module 7: Reinforcement Learning – https://colab.research.google.com/drive/1IlrlS3bB8t1Gd5Pogol4MIwUxlAjhWOQ#forceEdit=true&sandboxMode=true

⭐️ Course Contents ⭐️

⌨️ (00:03:25) Module 1: Machine Learning Fundamentals
⌨️ (00:30:08) Module 2: Introduction to TensorFlow
⌨️ (01:00:00) Module 3: Core Learning Algorithms
⌨️ (02:45:39) Module 4: Neural Networks with TensorFlow
⌨️ (03:43:10) Module 5: Deep Computer Vision – Convolutional Neural Networks
⌨️ (04:40:44) Module 6: Natural Language Processing with RNNs
⌨️ (06:08:00) Module 7: Reinforcement Learning with Q-Learning
⌨️ (06:48:24) Module 8: Conclusion and Next Steps

⭐️ About the Author ⭐️

The author of this course is Tim Ruscica, otherwise known as “Tech With Tim” from his educational programming YouTube channel. Tim has a passion for teaching and loves to teach about the world of machine learning and artificial intelligence. Learn more about Tim from the links below:
🔗 YouTube: https://www.youtube.com/channel/UC4JX40jDee_tINbkjycV4Sg
🔗 LinkedIn: https://www.linkedin.com/in/tim-ruscica/

Learn to code for free and get a developer job: https://www.freecodecamp.org

Read hundreds of articles on programming: https://freecodecamp.org/news


Comments (47)

  1. Thanks!

  2. 01:44:17 ARE WE TRAINING IT OR NOT?

  3. 01:40:40 Why did you make a function inside another one (input_function inside make_input_fn)?

  4. I know I am really late to this, but as of now, is it necessary to learn Estimators in TensorFlow as compared to scikit-learn? In TensorFlow, Estimators are on their final update, and they are also a lot more complicated (at least for me) to implement. I was just wondering if I need to learn them in order to fully grasp TensorFlow.

  5. Hi

  6. Hi

  7. Things I needed to update as I went through the course (will update as I go):

    1:13:12

    – Package change to scikit-learn: !pip install -q scikit-learn

  8. Learning it in 2024, and it feels like I'll be here a long time; not because of the teaching style, which is good, but because there is so much to grasp.

  9. thank you so much

  10. parch = "parent/child." It refers to the number of parents and children a passenger had aboard.

  11. Can someone explain why it is necessary to have a function inside the function for training? The make_input_fn and input_fn part and the lambda both use this. Why can't we just use input_fn directly?
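
    Both this comment and comment 3 are asking about the same pattern: estimator.train() expects a zero-argument callable that builds and returns a fresh tf.data.Dataset, so the outer function exists only to capture the data and settings in a closure that the inner function can use. A minimal sketch following the video's names (data_df and label_df are assumed to be the pandas objects from the course):

    ```python
    import tensorflow as tf

    def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
        def input_function():  # must take no arguments: the Estimator calls it as-is
            ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
            if shuffle:
                ds = ds.shuffle(1000)          # reshuffle the rows on each pass
            ds = ds.batch(batch_size).repeat(num_epochs)
            return ds
        return input_function                  # return the function itself, not its result

    # linear_est.train(make_input_fn(dftrain, y_train))   # usage as in the video
    ```

    The lambda used later in the course does the same job inline: it wraps a call that needs arguments inside a callable that takes none.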

  12. great job!

  13. 2:49:39 RAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHH RAAAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHH🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥 HE SAID IIIITTTTT!!!!!!!!!!!!!!! ONE PIECE!!!!!!!!! RAAAAAAAAAAH🔥

  14. 4:25:20 where can I find cats_vs_dogs file?

  15. Thank you a lot. Quite useful. Summary: I am not gonna talk about it.

  16. PSA: The Estimator API has been removed from TensorFlow in all versions after 2.15.0.

  17. 00:05 Introduction to TensorFlow 2.0 course for beginners.
    02:26 Introduction to Google Colaboratory for easy machine learning setup
    07:07 AI encompasses machine learning and deep learning
    09:35 Neural networks use layered representation of data in machine learning.
    14:12 Data is crucial in machine learning and neural networks
    16:37 Features are input information and labels are output information.
    21:07 Supervised learning involves guiding the model to make accurate predictions by comparing them to the actual labels
    23:21 Unsupervised machine learning involves clustering data points without specific output data.
    27:57 Training reinforcement models to maximize rewards in an environment.
    30:00 Introduction to TensorFlow and its importance
    34:36 Understanding the relation between computations and sessions in TensorFlow
    36:52 Google Colaboratory allows easy access to pre-installed modules and server connection.
    41:11 Importing TensorFlow in Google Colaboratory for TensorFlow 2.0
    43:17 Tensors are fundamental in TensorFlow 2.0
    47:58 Explanation of tensors and ranks
    50:12 Understanding TensorFlow tensor shapes and ranks
    54:41 Reshaping Tensors in TensorFlow
    56:47 Using TF session to evaluate tensor objects
    1:01:16 Different categories of machine learning algorithms
    1:03:07 Linear regression for data prediction
    1:07:22 Calculating the slope of a line using a triangle and dividing distances
    1:09:29 Predicting values using the line of best fit
    1:13:31 Overview of important Python modules like NumPy, pandas, and matplotlib
    1:15:43 Predicting survival on the Titanic using TensorFlow 2.0
    1:19:40 Splitting data into training and testing sets is crucial for model accuracy.
    1:21:48 Separating the data for classification
    1:26:09 Exploring dataset statistics and shape attributes
    1:28:12 Understanding the data insights from the analysis
    1:32:21 Handling categorical and numeric data in TensorFlow
    1:34:39 Creating feature columns for TensorFlow model training
    1:38:42 Epochs are used to feed data multiple times for better model training
    1:40:55 Creating an input function for TensorFlow data set objects
    1:45:19 Creating an estimator and training the model in TensorFlow
    1:47:21 Explanation on how to access and interpret statistical values from a neural network model.
    1:51:46 Exploring survival probabilities based on indices
    1:53:52 Introduction to classification in TensorFlow 2.0
    1:58:01 Data frames in TensorFlow 2.0 contain encoded species already, simplifying data preprocessing.
    2:00:08 Creating input function and feature columns in TensorFlow 2.0
    2:04:26 Setting up the neural network and defining the number of nodes and classes.
    2:06:35 Using lambda functions to create chained functions
    2:10:44 Creating a prediction function for specific flowers
    2:12:46 Explaining the process of predicting on a single value
    2:17:25 Clustering helps find clusters of like data points
    2:19:50 Data points are assigned to clusters based on distance to centroids.
    2:24:02 Understanding K means clustering
    2:26:09 Hidden Markov model uses states and observations with associated probabilities.
    2:30:36 Defining transition and observation probabilities in two states
    2:32:56 Hidden Markov Model predicts future events based on past events
    2:37:22 Explanation of transition probabilities and observation distribution in a Hidden Markov Model
    2:39:31 Mismatch between TensorFlow versions
    2:43:45 Hidden Markov models are used for probability-based predictions.
    2:45:35 Introduction to neural networks and their working principle.
    2:50:00 Designing the output layer for neural networks
    2:52:19 Neural networks make predictions based on probability distributions for each class.
    2:56:39 Introduction to biases as trainable parameters in neural networks
    2:58:53 Neural network nodes determine values using weighted sums of connected nodes.
    3:03:21 Explanation of different activation functions in neural networks
    3:05:38 Sigmoid function is chosen for output neuron activation
    3:10:00 Loss function measures the deviation of the neural network output from the expected output.
    3:12:25 Understanding the concept of cost function and gradient descent
    3:17:01 Neural networks update weights and biases to make better predictions with more data.
    3:19:17 Loading and exploring the Fashion MNIST dataset for training and testing neural networks.
    3:23:54 Data pre-processing is crucial for neural networks
    3:25:54 Pre-processing images is crucial for training and testing in neural networks
    3:30:26 Selecting optimizer, loss, and metrics for model compilation
    3:32:33 Training and testing a neural network model in TensorFlow 2.0
    3:36:51 Training with less epochs can lead to better model performance
    3:39:00 Understanding predictions and probability distribution
    3:43:34 TensorFlow deep learning model used for computer vision and classification tasks.
    3:45:42 Images are represented by three color channels: red, green, and blue
    3:50:09 Convolutional neural networks analyze features and patterns in images.
    3:52:19 Convolutional neural networks use filters to identify patterns in images
    3:56:49 Quantifying presence of filters using dot product
    3:58:52 Understanding filter similarity in TensorFlow 2.0
    4:03:09 Padding, Stride, and Pooling Operations in Convolutional Neural Networks
    4:05:17 Pooling operations reduce feature map size
    4:09:30 Loading and normalizing image data for neural networks
    4:11:41 Understanding the input shape and layer breakdown
    4:15:58 Optimizing model performance with key training strategies
    4:17:59 Data augmentation is crucial for training convolutional neural networks with small datasets.
    4:22:12 Utilizing pre-trained models for efficient neural network training
    4:24:19 Modifying last layers of a neural network for classifying
    4:28:24 Using pre-trained model, MobileNet v2, built into TensorFlow
    4:30:31 Freezing the base model to prevent retraining
    4:34:45 Evaluation of model with random weights before training.
    4:36:58 Saving and loading models in TensorFlow
    4:41:00 Natural Language Processing (NLP) is about understanding human languages through computing.
    4:43:19 Sentiment analysis and text generation using natural language processing model
    4:47:46 Introduction to bag of words technique in neural networks
    4:49:54 Bag of words technique encodes sentences with the same representation, losing their meaning.
    4:54:13 Word embeddings aim to represent similar words with similar numbers to address issues with arbitrary mappings.
    4:56:25 Introduction to word embeddings in a 3D space
    5:00:59 Difference between feed forward and recurrent neural networks
    5:03:22 Explanation of processing words sequentially in a neural network
    5:08:01 Introduction to Simple RNN and LSTM layers
    5:10:29 Long Short Term Memory (LSTM) allows access to output from any previous state.
    5:14:53 Padding sequences to ensure equal length for neural network input
    5:17:02 Creating a neural network model for sentiment analysis
    5:21:24 Evaluating model accuracy and preparing for predictions
    5:23:49 Explanation of padding and sequence processing in TensorFlow 2.0
    5:28:20 Analyzing sentiment impact on prediction accuracy
    5:30:27 Training neural network to generate text sequences
    5:34:48 Creating mapping from characters to indices
    5:37:09 Creating training examples for TensorFlow neural network model
    5:41:53 Batching and model building process in TensorFlow 2.0
    5:44:07 Setting model parameters and layers in TensorFlow 2.0
    5:49:05 Explaining model predictions for each element in batch and sequence length
    5:51:26 The model outputs a tensor for each training example, and we need to create our own loss function to determine its performance.
    5:56:05 Training neural networks with varying epochs for performance evaluation
    5:58:29 Generating output sequences using TensorFlow model
    6:02:53 Processing steps for text data in TensorFlow 2.0
    6:05:05 Building and training the model with different batch sizes and checkpoints
    6:09:25 Reinforcement learning involves an agent exploring an environment to achieve objectives.
    6:11:43 States, Actions, and Rewards in Reinforcement Learning
    6:16:24 Q matrix represents predicted rewards for actions in states.
    6:18:43 Maximize agent's reward in the environment
    6:23:21 Introducing exploration in reinforcement learning
    6:25:26 Balancing Q table and random actions in Q learning algorithm
    6:30:03 Discount factor helps in factoring future rewards into the equation for finding the best action in the next state.
    6:32:16 Introduction to OpenAI Gym for training reinforcement learning models
    6:36:46 Introduction to navigating a frozen lake environment using q learning.
    6:38:54 Max steps and learning rate in reinforcement learning
    6:43:05 Training the agent using Q-learning algorithm
    6:45:18 Training process involves adjusting epsilon and monitoring reward progress.
    6:49:39 Focus on a specific area in machine learning or AI for deeper learning.
    6:51:47 Largest open source machine learning course in the world focused on TensorFlow and Python.
    Crafted by Merlin AI.

  18. I could probably help you to become an artist if you might want to do that. It may help you to take a look at "Drawing on the Right Side of the Brain" book. I think there is even Kindle version. Love lambda.

  19. On the Titanic example, I think you misspoke: it is a linear classifier, not linear regression, just so everyone knows.

  20. you explain too well thanks

  21. Update @ 1:13:12: 'sklearn' has been replaced by scikit-learn

  22. Someone help me understand this

    epoch = blocks that contain rows taken from the dataset
    batch_size = number of rows in each epoch

    No two epochs contain the same rows of data? As in, if one epoch contains row 122, another epoch will not contain row 122.
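
    Not quite: an epoch is one complete pass over the entire dataset, so every epoch sees row 122; batch_size is the number of rows fed to the model per weight update within an epoch. Shuffling only changes the order, not the membership. A sketch with illustrative numbers (627 rows, the size of the Titanic training split used in the course):

    ```python
    rows, batch_size, num_epochs = 627, 32, 10

    batches_per_epoch = -(-rows // batch_size)       # ceiling division -> 20 batches
    total_weight_updates = batches_per_epoch * num_epochs

    # Every one of the 627 rows (row 122 included) appears once per epoch,
    # so the model sees each row num_epochs times in total.
    print(batches_per_epoch, total_weight_updates)   # 20 200
    ```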

  23. Thank you

  24. Hello, can anyone translate the comments on this video into another language?

  25. It seems that TensorFlow's linear regression estimator has been deprecated; does anyone know how to do it in the newer versions?
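
    Since the Estimator API is gone after TensorFlow 2.15 (see comment 16), the usual route now is a plain Keras model; a single sigmoid unit is the same model family as the course's LinearClassifier. A rough sketch, not the course's code: the crude label encoding below stands in for what the Estimator's feature columns used to do, and dftrain/y_train are assumed to be the course's pandas objects.

    ```python
    import tensorflow as tf

    def encode(df):
        """Crude stand-in for feature columns: integer-code the string columns."""
        df = df.copy()
        for col in df.select_dtypes(include="object"):
            df[col] = df[col].astype("category").cat.codes
        return df.astype("float32")

    # One dense unit with a sigmoid is logistic regression, i.e. the same
    # family of model as the deprecated tf.estimator.LinearClassifier.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # model.fit(encode(dftrain), y_train, epochs=10, batch_size=32)
    # model.evaluate(encode(dfeval), y_eval)
    ```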

  26. CAUTION: The video is outdated. You can use it for the concepts, but code-wise TensorFlow has deprecated many of the modules used in the code he presents.

  27. 🎯 Key Takeaways for quick navigation:

    00:00 🎓 Introduction to Course and Audience
    – Aimed at beginners in machine learning and artificial intelligence with basic programming knowledge.
    03:16 📚 Course Structure and Resources
    – Course breakdown, starting with machine learning and AI basics.
    10:16 🤖 Understanding Artificial Intelligence, Machine Learning, and Neural Networks
    – Definition of Artificial Intelligence (AI) as automating intellectual tasks.
    14:25 📊 Importance of Data in Machine Learning
    – Example dataset creation for student grades.
    16:17 📊 Features and Labels Basics
    – Features are input information for machine learning models.
    17:42 📈 Importance of Data in Machine Learning
    – Data is essential for creating machine learning models.
    19:35 🧠 Types of Machine Learning: Supervised Learning
    – Supervised learning involves having both features and labels.
    22:43 🌐 Types of Machine Learning: Unsupervised Learning
    – Unsupervised learning deals with only features and no labels.
    25:25 🤖 Types of Machine Learning: Reinforcement Learning
    – Reinforcement learning involves an agent, environment, and reward.
    30:10 🧠 Introduction to TensorFlow and Module Structure
    – TensorFlow is an open-source machine learning library by Google.
    31:38 🚀 What can be done with TensorFlow?
    – TensorFlow supports various machine learning tasks and neural networks.
    32:54 🧠 TensorFlow Overview
    – TensorFlow provides a library of tools for machine learning applications.
    36:02 🚀 Getting Started with Google Colaboratory
    – Google Colaboratory allows using Jupyter Notebooks in the cloud.
    – Specify TensorFlow version in Colaboratory with `%tensorflow_version 2.x`.
    43:29 🧮 Understanding Tensors
    – Tensors are a generalization of vectors and matrices to potentially higher dimensions.
    47:42 📏 Rank and Degree of Tensors
    49:03 📊 Understanding Tensor Rank and Shape
    52:13 🔄 Reshaping Tensors in TensorFlow
    55:30 🧠 Types of Tensors in TensorFlow
    56:51 🔄 Evaluating Tensors using Sessions
    01:00:06 🤖 Introduction to Core Machine Learning Algorithms
    01:04:20 📈 Linear Regression Basics
    01:05:40 🔄 Using Linear Regression in Prediction
    01:10:33 📊 Linear Regression in Three Dimensions
    01:12:29 🔍 Examples of Linear Regression
    01:15:48 🛳️ Titanic Dataset for Linear Regression
    01:19:06 📊 Data Preparation: Understanding Columns in Dataset
    01:20:02 📊 Data Preparation: Creating Training and Testing Sets
    01:21:19 📊 Data Exploration: Pandas DataFrames and Descriptive Stats
    01:26:57 📊 Data Visualization: Creating Histograms and Plots
    01:29:47 📊 Data Understanding: Training and Testing Sets Analysis
    01:30:14 📊 Feature Columns: Categorical and Numeric Data
    01:33:55 🧮 Feature Columns for Categorical Data
    01:36:14 📊 Feature Columns for Numeric Data
    01:37:08 🔄 Training Process Overview
    01:40:20 🤖 Input Function Creation
    01:45:02 🤖 Creating Linear Estimator
    01:46:25 🚂 Model Training Process
    01:47:48 📈 Model Evaluation and Predictions
    01:55:04 📊 Introduction to Classification
    01:57:25 🗃️ Loading and Preparing Dataset
    02:00:08 🔄 Input Function
    02:01:36 🧮 Feature Columns
    02:03:33 🧠 Building a Deep Neural Network Classifier
    02:04:26 🧠 Neural Network Architecture
    02:04:56 🤖 Training the Model
    02:07:43 🧾 Training Output and Evaluation
    02:09:58 🔄 Model Evaluation and Prediction
    02:11:53 📊 Predictions on New Data
    02:17:20 🤔 Introduction to Clustering (K-Means)
    02:20:07 🌐 K-Means Clustering Overview
    02:25:14 📊 Hidden Markov Models Introduction
    02:28:22 🎲 States, Observations, and Transitions
    02:33:56 🔍 Purpose of Hidden Markov Models
    02:35:44 🌡️ Hidden Markov Model Introduction
    02:37:09 📊 TensorFlow Probability Distributions
    02:38:33 📉 Building the Hidden Markov Model
    02:41:40 🔄 Modifying Probabilities and Observing Changes
    02:45:47 🧠 Introduction to Neural Networks
    02:50:45 🧠 Neural Network Layers and Output Design
    – Single output neuron with a value between 0 and 1 for binary classification.
    – Multiple output neurons for predicting probabilities in a classification task.
    02:53:07 🔗 Hidden Layer in Neural Networks
    02:54:28 🌐 Connectivity: Weights and Biases in Neural Networks
    03:00:25 ⚙️ Weighted Sum, Bias, and Information Flow
    03:02:39 🔄 Activation Functions in Neural Networks
    03:06:50 🧠 Activation Functions
    03:08:11 📊 Moving to Higher Dimensions
    03:09:32 📉 Loss Function Basics
    03:12:50 ⚙️ Optimizing with Gradient Descent
    03:18:01 🔄 Building the First Neural Network
    03:23:08 🖼️ Image and Label Exploration
    03:24:27 🔄 Data Pre-processing
    03:27:13 🧠 Model Creation
    03:30:04 🤖 Compiling the Model
    03:32:20 ⚙️ Training the Model
    03:35:05 🧪 Testing and Evaluating the Model
    03:38:19 🖼️ Overview of Image Prediction
    03:39:16 🧠 Understanding Predictions with Arrays
    03:40:07 🥿 Decoding Predictions to Class Names
    03:41:25 🤖 Verifying Predictions Script
    03:43:17 🌐 Introduction to Convolutional Neural Networks (CNN)
    03:45:11 🖼️ Understanding Image Data Dimensions
    03:47:01 🔄 Global vs. Local Patterns in Neural Networks
    03:51:11 📊 Convolutional Layer Output Feature Maps
    03:54:23 🎨 Convolutional Neural Network Overview
    03:55:19 🖼️ Looking for Filters in Images
    03:56:40 📊 Dot Product and Feature Maps
    04:00:23 🔄 Padding, Stride, and Computational Efficiency
    04:04:59 🏞️ Pooling Operations
    04:08:10 🚀 Building a Convolutional Neural Network with Keras
    04:09:30 🖼️ Loading CIFAR-10 Dataset and Normalization
    04:12:40 🧱 Convolutional Base Summary
    04:14:51 🧠 Adding Dense Layers for Classification
    04:15:49 🎓 Model Training and Evaluation
    04:18:37 🔄 Data Augmentation
    04:22:42 🤖 Using Pre-trained Models
    04:24:10 🤖 Using Pre-trained Models
    04:25:33 📊 Loading and Preprocessing Data
    04:26:56 🖼️ Image Reshaping and Scaling
    04:29:43 🧠 Picking a Pre-trained Model
    04:31:55 🧊 Freezing the Base Model
    04:33:19 🏗️ Adding Custom Classifier
    04:34:39 🎓 Model Compilation and Evaluation
    04:36:58 🚂 Training the Model
    04:37:57 💾 Saving and Loading Models
    04:38:24 🔍 Introduction to Object Detection
    04:38:50 🧠 Understanding TensorFlow and Facial Recognition
    04:41:09 🗣️ Natural Language Processing with Recurrent Neural Networks
    04:42:35 📈 Applications of Recurrent Neural Networks
    04:44:24 📊 Challenges in Textual Data Processing
    04:51:55 🔠 Issues with Direct Word-to-Integer Encoding
    04:54:39 📊 Understanding Word Representation Challenges
    04:55:32 🛠️ Word Embeddings Overview and Visualization
    04:58:44 🧠 Word Embeddings as a Layer in Neural Networks
    04:59:41 🔢 Preparing Textual Data for Neural Networks
    05:02:26 🔄 Unraveling Recurrent Neural Network Layers
    05:09:49 🚀 Long Short-Term Memory (LSTM) Layers
    05:11:12 🧠 Understanding Long Short-Term Memory (LSTM)
    05:12:34 🎥 Sentiment Analysis on Movie Reviews
    05:13:55 📊 Data Preprocessing for Neural Network Input
    05:17:11 🧠 Building and Compiling the LSTM Model
    05:21:43 📈 Model Evaluation and Results
    05:24:19 🧐 Making Predictions with the Trained Model
    05:26:44 🤖 Overview of Text Processing
    05:28:09 📈 Sentiment Analysis Example
    05:30:23 🎭 Recurrent Neural Network for Text Generation
    05:31:48 🧠 Data Loading and Preprocessing
    05:34:06 🧮 Encoding Characters and Creating Functions
    05:37:18 🚀 Creating Training Examples
    05:38:39 🔄 Mapping Sequences and Creating Batches
    05:41:23 🏗️ Building the Model
    05:42:46 🏗️ Building the Model Architecture
    05:47:13 📉 Creating a Loss Function
    05:55:57 🚂 Compiling and Training the Model
    05:57:23 🔄 Rebuilding Model for Inference
    05:59:41 🧠 Understanding Text Generation with RNNs
    06:07:20 🤖 Recap and Guidance on Complex ML Concepts
    06:08:18 🎮 Introduction to Reinforcement Learning
    06:09:42 🔄 Key Concepts: Environment, Agent, State, Action, Reward
    06:15:10 🧠 Introduction to Q-Learning in Reinforcement Learning
    06:15:36 🤖 Q-Learning Introduction
    06:17:50 🎨 Q-Learning Example on Whiteboard
    06:19:14 🕹️ Navigating the Environment and Learning the Q-Table
    06:25:15 🔄 Learning the Q-Table – Constants and Update Formula
    06:31:40 🧠 Q-Learning and OpenAI Gym Introduction
    06:32:35 🎮 OpenAI Gym Environment Setup
    06:33:58 📊 Constants and Environment Setup
    06:35:21 🔄 Picking Actions in Q-Learning
    06:42:25 🔄 Updating Q-Values
    06:44:39 🚀 Training Q-Learning Model
    06:45:58 📈 Training Results and Graph
    06:47:17 🤖 Q-Learning Example Conclusion
    06:48:13 🏁 Conclusion of Reinforcement Learning Module
    06:48:41 🚀 Next Steps and Further Learning Recommendations
    06:50:26 🎓 Advice for Specialization and General Exploration
    06:51:47 🏆 Course Conclusion and Call to Action

    Made with HARPA AI

  28. This video is gold. I am an MSc student in AI, and I literally use this video as a reference to understand some topics that are poorly explained in the modules. I've watched 5/7 hours.

  29. At 1:52:09, how do you conclude that the survival probability is the last one and not the first one?
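
    Because the 'probabilities' array is ordered by class label, and the Titanic labels are 0 (did not survive) and 1 (survived), the survival probability is the entry at index 1, which happens to be the last one in a two-class problem. A fragment reusing the video's variable names (linear_est and eval_input_fn come from the course notebook):

    ```python
    result = list(linear_est.predict(eval_input_fn))
    probs = result[0]['probabilities']        # e.g. array([0.91, 0.09])
    print('P(did not survive) =', probs[0])   # index 0 <-> label 0
    print('P(survived)        =', probs[1])   # index 1 <-> label 1
    ```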

  30. Currently on hour 2. But I am really hoping that I can come out of this with just a little bit more knowledge of machine learning!

  31. Thank you freeCodeCamp for providing us with such an amazing course. But just a polite request: could you maybe do a few videos on machine learning or AI using C++ instead of Python?

  32. Thank you TIM 🙂

  33. Parch typically refers to the number of family members aboard. The siblings column would affirm parent status, and the fare column would be an indication of socioeconomic factors.

  34. Your explanations are great. It's almost 2024; maybe time to invest in Adobe AE… or even PowerPoint? Hand-drawn diagrams with spelling errors detract from the overall quality of your course.

  35. Windows 11: wasted time @ 1:13:12 trying to install sklearn (old name?). Apparently Colab already has it; "!pip install -U scikit-learn" shows "Requirement already satisfied".

  36. Hello, I have a question regarding Module 3. Once we have trained our linear regression model so that its accuracy is good enough, how can I save it to work with in future examples?
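
    With the Estimator used in Module 3 this was awkward, but with a Keras model (the modern replacement, as noted in other comments) persisting a trained model is two calls. A sketch assuming `model` is a trained tf.keras model; the filename is illustrative:

    ```python
    import tensorflow as tf

    model.save("titanic_linear.keras")    # writes architecture + trained weights
    restored = tf.keras.models.load_model("titanic_linear.keras")
    # restored.predict(...)               # reuse on new data in a later session
    ```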

  37. We should give AI some human moral values to abide by before setting rules! Right?

  38. Is this one suitable for 2024 also, or should I look for a later update? Please advise.
