/Interview-Questions-Data-Scientist-Positions

This repository contains the top 500 (ongoing) questions for Data Scientist positions at various companies.


Interview Questions - Data Scientist Positions (Entry, Mid & Senior)

This repository contains a curated list of topic-wise questions for Data Scientist positions at various companies.

⭐ - Entry Level positions
⭐⭐ - Mid Level positions
⭐⭐⭐ - Senior Level positions

Topics (Ongoing) No. of Questions
1. Data Science & ML - General Topics 34
2. Regression Techniques 22
3. Classification Techniques 39
3.1 Support Vector Machines 12
3.2 Decision Tree 16
3.3 Boosting (GBM, LightGBM, CatBoost) 5
3.4 Naive Bayes Classifier 5*
4. Stats & Probability 5
5. Deep Learning Fundamentals 15*
5.1 CNN 14
6. Attention & Transformers 10*

Data Science & ML - General Topics

  1. What is the basic difference between AI, Machine Learning (ML) & Deep Learning (DL)? ⭐

Ans: Artificial Intelligence (AI) is a broad field that encompasses many different techniques and technologies, including machine learning (ML) and deep learning (DL).

  • Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It is a broad field that includes many different approaches and techniques, such as rule-based systems, and expert systems.
  • Machine Learning (ML) is a subfield of AI that is focused on the development of algorithms and statistical models that enable machines to learn from data and make predictions or decisions without being explicitly programmed.
  • Deep Learning (DL) is a type of machine learning that is inspired by the structure and function of the brain's neural networks. It uses multiple layers of artificial neural networks to learn representations of data with multiple levels of abstraction. DL algorithms can be used for tasks such as image and speech recognition, natural language processing, and decision-making.



  1. Can you explain the difference between supervised and unsupervised learning? ⭐

Ans: The main difference between them is the type and amount of input provided to the algorithms.

  • Supervised learning is a type of machine learning where the model is trained on a labeled dataset, i.e., the model is provided with input-output pairs, and the goal is to learn a mapping from inputs to outputs. This mapping can then be used to make predictions on new, unseen data. Examples of supervised learning include regression, classification and prediction tasks.
  • Unsupervised learning, on the other hand, is a type of machine learning where the model is not provided with labeled data. Instead, the algorithm is given a dataset without any output labels, and the goal is to find patterns or structure within the data. Examples of unsupervised learning include clustering, dimensionality reduction, and anomaly detection tasks.

  1. How do you handle missing data in your dataset? What are some common techniques for imputing missing values? ⭐

Ans: There are several techniques for handling missing data in a dataset, some of the most common include:

  • Mean/Median Imputation: This method replaces the missing value with the mean or median of the non-missing values in the same column. (Numerical)
  • Random Sample Imputation: This method replaces the missing value with a random sample from the non-missing values in the same column. (Numerical)
  • Most Frequent Imputation: This statistical strategy works with categorical features (strings or numerical representations) by replacing missing data with the most frequent value within each column. (Numerical/Categorical)
  • Zero or Constant Imputation: By this method, the missing values are replaced with zero or a constant value. (Numerical)
  • Regression Imputation: By this method, the missing values are predicted with a model, such as KNN imputation for numerical values or logistic regression for categorical missing values. (Numerical/Categorical)
  • Multivariate Imputation by Chained Equation (MICE): This type of imputation works by filling in the missing data multiple times. Multiple Imputations (MIs) are much better than a single imputation as it measures the uncertainty of the missing values in a better way.
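Below is a minimal sketch of a few of these strategies, assuming scikit-learn and pandas are available; the column names and values are toy examples, and MICE-style imputation is available in scikit-learn via the experimental IterativeImputer.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer

# Toy data with missing values; the columns are made up for illustration
df = pd.DataFrame({"age":    [25, np.nan, 31, 40, 28],
                   "income": [40_000, 52_000, np.nan, 75_000, 45_000],
                   "city":   ["NY", "SF", np.nan, "NY", "SF"]})

# Mean imputation for numerical columns
num_mean = SimpleImputer(strategy="mean").fit_transform(df[["age", "income"]])

# Most-frequent imputation for a categorical column
cat_mode = SimpleImputer(strategy="most_frequent").fit_transform(df[["city"]])

# Multivariate imputation: KNNImputer fills a missing value from similar rows
num_knn = KNNImputer(n_neighbors=2).fit_transform(df[["age", "income"]])

print(num_mean, cat_mode.ravel(), num_knn, sep="\n")
```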

  1. How do you select the appropriate evaluation metric for a given problem, and what are the trade-offs between different metrics such as precision, recall, and F1-score? ⭐

Ans: Selecting the appropriate evaluation metric for a given problem depends on the characteristics of the data and the goals of the model. Here are some common evaluation metrics and the situations in which they are typically used:

  • Accuracy: This metric measures the proportion of correct predictions made by the model. It is commonly used when the classes are well balanced and the goal is to have a high overall performance of the model.

  • Precision: This metric measures the proportion of true positive predictions to all positive predictions made by the model. It is commonly used when the goal is to minimize false positives. (Ex- Financial Transactions/ Spam Mail)

  • Recall: This metric measures the proportion of true positive predictions to all actual positive cases in the dataset. It is commonly used when the goal is to minimize false negatives. (Ex-Medical diseases/Stock Market breakdown)

  • F1-Score: This metric is the harmonic mean of precision and recall. It balances the trade-off between precision and recall and is commonly used when the goal is to have a balance between both.

  • ROC-AUC: This metric measures the area under the Receiver Operating Characteristic curve and is commonly used when the classes are imbalanced and the goal is to have a high overall performance of the model.
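A short sketch of computing these metrics with scikit-learn; the labels and scores below are toy values chosen only to illustrate the API.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                    # hard class predictions
y_score = [0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3]    # predicted probabilities

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
print("roc-auc  :", roc_auc_score(y_true, y_score))   # uses scores, not labels
```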


  1. What does the beta value imply in the F-beta score? What is the optimum beta value? ⭐⭐

Ans: The F-beta score is a variant of the F1-score in which the beta value controls the trade-off between precision and recall. The F1-score is the harmonic mean of precision and recall, calculated as (2 * (precision * recall)) / (precision + recall); the general form is F-beta = ((1 + beta²) * precision * recall) / (beta² * precision + recall).

  • A beta value of 1 is equivalent to the F1 score, which means it gives equal weight to precision and recall.
  • A beta value less than 1 gives more weight to precision, which means it will increase the importance of precision over recall.
  • A beta value greater than 1 gives more weight to recall, which means it will increase the importance of recall over precision. There is no single optimum beta value; it depends on whether false positives (favour beta < 1) or false negatives (favour beta > 1) are more costly for the problem.
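A minimal illustration, using scikit-learn's fbeta_score on toy labels, of how beta shifts the weighting between precision and recall.

```python
from sklearn.metrics import fbeta_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

print(fbeta_score(y_true, y_pred, beta=0.5))  # weights precision more
print(fbeta_score(y_true, y_pred, beta=1.0))  # plain F1
print(fbeta_score(y_true, y_pred, beta=2.0))  # weights recall more
```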

  1. What are the advantages & disadvantages of Linear Regression? ⭐

Ans:

  • Advantages:

    • Linear regression is a simple and interpretable algorithm that is easy to understand and implement.
    • It is a well-established technique that has been widely used for many years and is a good benchmark for other more complex algorithms.
    • Linear regression can handle large datasets with many input features and it's relatively fast and efficient to train.
    • It's easy to identify and measure the relationship between input features and target variables by looking at the coefficients.
  • Disadvantages:

    • Linear regression assumes that the relationship between the input features and the target variable is linear, which may not always be the case in real-world problems.
    • It can't capture non-linear relationships between variables.
    • Linear regression is sensitive to outliers, as a single outlier can greatly impact the line.
    • Linear regression does not account for errors in the observations.
    • Linear regression can be affected by multicollinearity, which occurs when input features are highly correlated with each other.

  1. How do you handle categorical variables in a dataset? ⭐

Ans: Handling categorical variables in a dataset is an important step in the preprocessing of data before applying machine learning models. Here are some common techniques for handling categorical variables:

  • One-Hot Encoding: This method creates a new binary column for each unique category in a categorical variable. Each row is then encoded with a 1 or 0 in the corresponding column, depending on the category. This method is useful when there is no ordinal relationship between categories.
  • Ordinal Encoding: This method assigns an integer value to each category in a categorical variable. This method is useful when there is an ordinal relationship between categories.
  • Binary Encoding: This method assigns a binary code to each category. This method is useful when the number of categories is high and one-hot encoding creates too many columns.
  • Count Encoding: This method replaces each category with the number of times it appears in the dataset.
  • Target Encoding: This method replaces each category with the mean of the target variable for that category.
  • Frequency Encoding: This method replaces each category with the frequency of that category in the dataset. Note that some techniques, like One-Hot Encoding and Binary Encoding, add new columns and increase dimensionality, which can affect model performance. Additionally, target encoding and count encoding can leak information from the target variable, which can lead to overfitting. A short sketch of a few of these encodings follows.
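A rough sketch of a few of these encodings using pandas and scikit-learn; the color/size/price columns are invented for illustration.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"color": ["red", "blue", "red", "green"],
                   "size":  ["S", "M", "L", "M"],
                   "price": [10, 12, 15, 11]})

# One-hot encoding (no ordinal relationship assumed)
one_hot = pd.get_dummies(df["color"], prefix="color")

# Ordinal encoding with an explicit category order
df["size_ord"] = OrdinalEncoder(categories=[["S", "M", "L"]]).fit_transform(df[["size"]]).ravel()

# Count / frequency encoding
df["color_count"] = df["color"].map(df["color"].value_counts())

# Target (mean) encoding -- beware of target leakage in practice
df["color_target"] = df.groupby("color")["price"].transform("mean")

print(one_hot, df, sep="\n")
```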

📙 Back to Top Section

  1. What is the curse of dimensionality and how does it affect machine learning? ⭐

Ans: The curse of dimensionality refers to the problem of increasing complexity and computational cost in high-dimensional spaces. In machine learning, it arises when the number of features in a dataset is large relative to the number of observations. This can cause problems for several reasons:

  • Sparsity: As the number of features grows, the observations become spread thinly across the feature space, so most regions contain few or no data points. This makes it difficult for models to learn reliable patterns from the data.
  • Overfitting: With a high number of features, models are more likely to fit the noise in the data rather than the underlying patterns. This can lead to poor performance on new, unseen data.
  • Computational cost: High-dimensional spaces require more memory and computational power to store and process the data. This can make it difficult to train models and make predictions in real-world applications.

  1. What are the approaches to mitigate the curse of dimensionality? ⭐

Ans: These are some mechanisms to deal with the curse of dimensionality:

  • Dimensionality reduction: Techniques like principal component analysis (PCA), linear discriminant analysis (LDA), or t-distributed stochastic neighbor embedding (t-SNE) can be used to reduce the number of features by combining or selecting a subset of the original features.
  • Regularization: Techniques like L1 or L2 regularization can help prevent overfitting by adding a penalty term to the model's objective function that discourages the model from fitting to noise in the data.
  • Sampling: With high-dimensional data, it is often infeasible to use all the data. In such cases, random sampling could be used to reduce the size of the data to work with.
  • Ensemble methods: Ensemble methods like random forests and gradient boosting machines can be used to reduce overfitting and improve generalization performance in high-dimensional data.

  1. Can you explain the bias-variance tradeoff? ⭐

Ans: The bias-variance tradeoff is a fundamental concept in machine learning that describes the trade-off between the error a model makes because of overly simple assumptions (bias) and the error it makes because of sensitivity to fluctuations in the training data (variance), which together determine how well the model generalizes to new, unseen data.

  • Bias refers to the error introduced by approximating a real-world problem, which may be incredibly complex, with a much simpler model. High-bias models are typically considered to be "oversimplified" and will have a high error on the training set.
  • On the other hand, variance refers to the error introduced by the model's sensitivity to small fluctuations in the training data. High-variance models are typically considered to be "overcomplicated" or "overfit" and will have a high error on the test set.

  1. How do you prevent overfitting in a model? ⭐

Ans: Overfitting occurs when a model is too complex and captures the noise in the training data, instead of the underlying patterns. This can lead to poor performance on new, unseen data. Here are some common techniques for preventing overfitting:

  • Regularization: Techniques like L1 or L2 regularization add a penalty term to the model's objective function that discourages it from fitting noise in the data; dropout achieves a similar effect in neural networks by randomly deactivating units during training.
  • Early stopping: This technique is used when training deep neural networks, it monitors the performance of the model on a validation set and stops the training process when the performance on the validation set starts to degrade.
  • Cross-validation: This technique involves dividing the data into several subsets and training the model on different subsets while evaluating the model performance on the remaining subsets. This technique helps to get a better estimate of the model's performance on unseen data.
  • Ensemble methods: Ensemble methods like random forests and gradient boosting machines can be used to reduce overfitting and improve generalization performance. Ensemble methods combine the outputs of multiple models to produce a more robust prediction.
  • Feature selection or dimensionality reduction: By reducing the number of features, it can decrease the complexity of the model and prevent overfitting.
  • Simplifying the model: By simplifying the model architecture or reducing the number of parameters, it can decrease the complexity of the model and prevent overfitting.

  1. What is hypothesis testing? Explain with a proper example. ⭐

Ans: Hypothesis testing is a statistical method used to determine whether a claim or hypothesis about a population parameter is true or false. The process of hypothesis testing involves making an initial assumption or hypothesis about the population parameter and then using sample data to test the validity of that assumption.

  • There are two types of hypotheses in hypothesis testing: the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis states that there is no effect, difference, or relationship in the population parameter of interest, while the alternative hypothesis states that there is one.
  • The process of hypothesis testing involves several steps:
    1. State the null and alternative hypotheses
    2. Choose a level of significance (alpha)
    3. Collect and analyze sample data
    4. Calculate a test statistic and its corresponding p-value
    5. Compare the p-value to the level of significance (alpha)
    6. Make a decision and interpret the results.
      For example, let's say a company wants to know if the mean weight of their product is equal to 50 grams. The null hypothesis (H0) would be that the mean weight of the product is equal to 50 grams, and the alternative hypothesis (H1) would be that the mean weight of the product is not equal to 50 grams. The company would then take a sample of products and calculate the mean weight of the sample. Using statistical methods, they would determine if the sample mean is statistically significantly different from 50 grams. If the sample mean is statistically significantly different from 50 grams, the company would reject the null hypothesis and conclude that the mean weight of the product is not equal to 50 grams.
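The 50-gram example can be sketched as a one-sample t-test with SciPy; the sample weights below are simulated, not real measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50.8, scale=2.0, size=40)   # simulated product weights in grams

# H0: mean weight == 50 g, H1: mean weight != 50 g (two-tailed)
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```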

  1. What are Type 1 & Type 2 errors? ⭐

Ans: Type 1 error, also known as a false positive, occurs when the null hypothesis is rejected, but it is actually true. In other words, it is the mistake of rejecting a null hypothesis that is true. The probability of making a Type 1 error is represented by the level of significance (alpha) chosen before the hypothesis test. A common level of significance is 0.05, which means that there is a 5% chance of making a Type 1 error.

  • For example, in a medical test to detect a disease, a Type 1 error would be a false positive, where the test says a patient has the disease, but they actually do not.
  • Type 2 error, also known as a false negative, occurs when the null hypothesis is not rejected, but it is actually false. In other words, it is the mistake of failing to reject a null hypothesis that is false. The probability of making a Type 2 error is represented by beta. A common value of beta is 0.2, which means that there is a 20% chance of making a Type 2 error.
  • For example, in a medical test to detect a disease, a Type 2 error would be a false negative, where the test says a patient does not have the disease, but they actually do.

  1. Explain some statistical tests' use cases (e.g., two-tailed test, t-test, ANOVA test, Chi-Squared test). ⭐

Ans: The use cases of the tests are as follows,

  • t-test: A t-test is used to determine if there is a significant difference between the means of two groups. There are several types of t-tests, including independent samples t-test, dependent samples t-test, and one-sample t-test. It is commonly used for comparing the means of two samples or for comparing a sample mean to a known population mean.
  • ANOVA (Analysis of Variance): ANOVA is used to determine if there is a significant difference between the means of two or more groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA. It is commonly used for comparing means of multiple samples or for comparing a sample mean to multiple known population means.
  • Chi-Square test: A Chi-Square test is used to determine if there is a significant association between two categorical variables. It is commonly used for testing independence in contingency tables and for goodness of fit tests.

  1. What does it mean when the p-value is high or low? ⭐

Ans: In hypothesis testing, the p-value is used to estimate the probability of obtaining a test statistic as extreme or more extreme than the one observed, assuming that the null hypothesis is true.

  • A low p-value (typically less than 0.05) indicates that the evidence against the null hypothesis is strong and that the null hypothesis is unlikely to be true given the data. In other words, a low p-value suggests that the sample data is unlikely to have occurred by chance alone and that the observed difference is statistically significant.
  • A high p-value (typically greater than 0.05) indicates that the evidence against the null hypothesis is weak and that the null hypothesis is likely to be true given the data. In other words, a high p-value suggests that the sample data is likely to have occurred by chance alone and that the observed difference is not statistically significant.

  1. What is the significance of KL Divergence in Machine Learning? ⭐⭐

Ans: KL divergence (also known as Kullback-Leibler divergence) is a measure of the difference between two probability distributions. In machine learning, KL divergence is used to measure the similarity or dissimilarity between two probability distributions, usually between the estimated distribution and the true distribution.

  • One of the main uses of KL divergence in machine learning is in the field of unsupervised learning, particularly in the training of generative models. For example, Variational Autoencoder(VAE) and Generative Adversarial Networks(GANs) use KL divergence as a loss function to measure the difference between the estimated distribution and the true distribution of the data.

  • KL divergence is a popular measure for comparing probability distributions. It is a non-symmetric measure, which means that the KL divergence between distributions P and Q is not, in general, equal to the KL divergence between Q and P. Additionally, it is computationally efficient, and it is easy to calculate the gradient of KL divergence, which makes it suitable for optimization.

  • Some of the common use cases of KL Divergence include:

    • Compression: KL Divergence is used to determine the best compression scheme for a dataset by minimizing the information loss.

    • Machine learning: KL Divergence is used in variational inference for approximating posterior distributions in Bayesian models.

    • Reinforcement learning: KL Divergence is used to measure the difference between the current policy and the optimal policy.

    • Clustering: KL Divergence is used to compare different cluster assignments and evaluate the quality of clustering algorithms.

    • Model evaluation: KL Divergence can be used to compare different models and determine the best fit for a given dataset.
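A minimal sketch of KL divergence between two toy discrete distributions, computed both directly and with scipy.stats.entropy.

```python
import numpy as np
from scipy.stats import entropy

p = np.array([0.1, 0.4, 0.5])
q = np.array([0.2, 0.3, 0.5])

kl_manual = np.sum(p * np.log(p / q))   # D_KL(P || Q)
kl_scipy  = entropy(p, q)               # same quantity via SciPy

print(kl_manual, kl_scipy)
print(entropy(q, p))                    # note: KL divergence is not symmetric
```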


  1. How could you deal with data skewness? What are the approaches to resolve skewness in the data? ⭐

Ans: Skewness is a measure of the asymmetry of a probability distribution.

  • When the skewness is positive, the distribution has a long tail on the right side, and typically the mode is less than the median, which is less than the mean.
  • When the skewness is negative, the distribution has a long tail on the left side, and typically the mode is greater than the median, which is greater than the mean.
  • There are several ways to deal with skewness in data:
    1. Data Transformation: Transformations such as the logarithm, square root, and reciprocal are generally used to reduce positive skew; power transformations (e.g., squaring) or reflecting the data before a log transform can be used for negative skew.
    2. Binning: Binning is a method of grouping continuous variables into a smaller number of categories. It can be used to reduce the effect of outliers and to make the data more symmetrical.

  1. What is IQR? How is it used to detect outliers? ⭐

Ans: IQR stands for interquartile range. It is a measure of the spread of a dataset based on the difference between the 75th percentile (Q3) and the 25th percentile (Q1) of the data: IQR = Q3 - Q1.

  • Outliers can be detected using the IQR method by calculating the lower and upper bounds of the data. The lower bound is defined as Q1 - 1.5 * IQR, and the upper bound is defined as Q3 + 1.5 * IQR. Any data points that fall outside of this range are considered to be outliers.
  • It is important to note that while the IQR method is a useful tool for identifying outliers, it is not always the best method to use, especially when the data has a non-normal distribution. Other methods such as the Z-score method should also be considered.
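A short sketch of IQR-based outlier detection with NumPy on a toy array.

```python
import numpy as np

data = np.array([10, 12, 12, 13, 12, 11, 14, 13, 15, 102, 12, 14, 17])

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # standard 1.5 * IQR fences

outliers = data[(data < lower) | (data > upper)]
print(f"IQR = {iqr}, bounds = ({lower}, {upper}), outliers = {outliers}")
```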

  1. Which algorithms are sensitive to outliers, and which are robust to them? ⭐

Ans: There are several algorithms that are considered to be robust to outliers:

  • The Median Absolute Deviation (MAD) method: This is a robust measure of statistical dispersion that is based on the median of the absolute deviations from the median, rather than the mean.
  • The Huber loss function: This is a robust cost function that is less sensitive to outliers than the mean squared error (MSE) function.
  • The RANSAC algorithm: This is a robust estimation method that is designed to fit a model to a dataset that contains outliers.
  • Random Forest: Random Forest creates multiple decision trees, and each tree is trained on different random subset of features and observations, it is less sensitive to outliers than a single decision tree.
  • Gradient Boosting Machine (GBM): GBM is also a collection of decision trees, it is less sensitive to outliers because it combines the decision of multiple weak learners.
  • Support Vector Machines (SVMs): SVMs try to find the largest margin between the classes, so they are comparatively less affected by the presence of a few outliers.

📙 Back to Top Section

  1. Why is feature scaling important? What are the feature scaling techniques? ⭐

Ans: Feature scaling is an important preprocessing step for many machine learning algorithms because it standardizes the range of the independent variables (features). Algorithms that rely on distances or on gradient-based optimization (e.g., k-NN, SVMs, neural networks) converge faster and perform better when features are on comparable scales. Common scikit-learn scalers include:

  • MinMaxScaler: This class provides a transformer to scale the data set in a given range, usually between 0 and 1.
  • StandardScaler: This class provides a transformer to standardize the data set, by centering and scaling.
  • RobustScaler: This class provides a transformer to scale the data set using statistics that are robust to outliers.
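A minimal sketch applying the three scalers above (from scikit-learn) to the same toy column so the effect of an outlier is visible.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])   # 100 acts as an outlier

print(MinMaxScaler().fit_transform(X).ravel())
print(StandardScaler().fit_transform(X).ravel())
print(RobustScaler().fit_transform(X).ravel())        # least affected by the outlier
```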

  1. If we don't remove the highly correlated features in the dataset, how does it impact model performance? ⭐⭐

Ans: If you don't remove highly correlated features from your dataset, it can have a negative impact on the performance of your model:

  • Overfitting: The model will try to fit the noise in the data and this will result in poor generalization performance.
  • Instability: The model's parameters and coefficients may change dramatically with small changes in the training data.
  • Difficulty in interpreting the model: If two or more variables are highly correlated, it becomes difficult to interpret which variable is more important for the model.
  • Difficulty in model optimization: High correlation can lead to slow convergence of the model's optimization process, making it difficult to find the optimal solution.
  • Large variance and low bias: High correlation can lead to large variance and low bias in the model, which may result in an over-complicated model that is prone to overfitting.

Removing highly correlated variables before training the model can help to improve the model's performance by removing multicollinearity and reducing the complexity of the model.


  1. What is Spearman Correlation? What do you mean by positive and negative correlation? ⭐

Ans: Spearman correlation is a measure of the statistical dependence between two variables; it is also known as the Spearman rank correlation coefficient. It is a non-parametric measure of correlation, which means that it does not assume that the underlying distribution of the data is normal. Instead, it calculates the correlation between the ranks of the data points.

  • Spearman correlation coefficient ranges from -1 to 1. A coefficient of 1 indicates a perfect positive correlation, meaning that as one variable increases, the other variable also increases. A coefficient of -1 indicates a perfect negative correlation, meaning that as one variable increases, the other variable decreases. A coefficient of 0 indicates no correlation between the two variables.

  1. What is the difference between Covariance & Correlation? ⭐⭐

Ans: Covariance is a measure of the degree to which two random variables change together. It can be positive, negative or zero. A positive covariance means that the variables increase or decrease together, a negative covariance means that as one variable increases, the other variable decreases and a zero covariance means that there is no relationship between the two variables. The formula for covariance is: Cov(X, Y) = (1/n) * Σ(x - x̄) * (y - ȳ) where X and Y are the two random variables, x̄ and ȳ are the means of X and Y, respectively, and n is the number of observations.

  • Correlation, on the other hand, is a standardized version of covariance. It is a value between -1 and 1 that indicates the strength and direction of the linear relationship between two variables.The formula for correlation is: Corr(X, Y) = Cov(X, Y) / (σX * σY) where σX and σY are the standard deviations of X and Y, respectively. In summary, covariance is a measure of how two variables change together while correlation is a standardized version of covariance that describes the strength and direction of the linear relationship between two variables.
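A short sketch comparing covariance, Pearson correlation, and Spearman correlation on two toy variables with NumPy and SciPy.

```python
import numpy as np
from scipy.stats import spearmanr

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 1, 4, 3, 7], dtype=float)

cov_xy  = np.cov(x, y)[0, 1]          # covariance (scale-dependent)
pearson = np.corrcoef(x, y)[0, 1]     # covariance / (std_x * std_y)
rho, p_value = spearmanr(x, y)        # rank-based, non-parametric

print(cov_xy, pearson, rho)
```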

  1. What is the difference between Multiclass Classification Models & Multilabel Classification Models? ⭐

Ans: In multiclass classification, the goal is to classify instances into one of several predefined classes. For example, classifying images of animals into dog, cat, and horse classes. Each instance can only belong to one class, and the classes are mutually exclusive.

Multilabel classification, on the other hand, is a problem where each instance can belong to multiple classes simultaneously. For example, classifying news articles into multiple topics, such as sports, politics, and technology. In this case, an article can belong to the sports and politics class simultaneously.


  1. Can you explain the concept of ensemble learning? ⭐

Ans: Ensemble learning is a technique in machine learning where multiple models are combined to create a more powerful model. The idea behind ensemble learning is to combine the predictions of multiple models to create a final prediction that is more accurate and robust than the predictions of any individual model.


  1. What are the different ensemble modeling strategies in ML? ⭐

Ans: There are several ensemble learning techniques, including:

  • Bagging: This technique involves training multiple models independently on different subsets of the data and then averaging or voting on their predictions. This can be useful for reducing the variance of a model.
  • Boosting: This technique involves training multiple models sequentially, where each model is trained to correct the mistakes of the previous model. This can be useful for reducing the bias of a model.
  • Stacking: This technique involves training multiple models independently on the same data and then combining their predictions in a second-level model.
  • Blending: This technique is similar to stacking, but it involves training the second-level model on a subset of the data rather than on the predictions of the first-level models.

  1. How do you select features for a machine learning model? ⭐

Ans: There are several feature selection algorithms used in machine learning, including:

  • Filter Methods: This approach is based on the correlation between the features and the target variable. It includes methods like correlation coefficient, mutual information and Chi-Squared.
  • Wrapper Methods: This approach is based on evaluating the performance of a subset of features in a model. It includes methods like forward selection, backward elimination, recursive feature elimination and genetic algorithms.
  • Embedded Methods: This approach is based on training the model with all the features and then selecting the relevant features based on their contribution to the performance of the model. It includes methods like Lasso regularization and Ridge regularization.
  • Hybrid Methods: This approach combines the strengths of filter and wrapper methods.
  • Mutual Information Feature Selection: This approach selects feature by maximizing the mutual information between the feature and the target variable.

  1. What are the dimensionality reduction techniques in Machine Learning? ⭐⭐

Ans: There are several dimensionality reduction techniques in machine learning, including:

  • Principal Component Analysis (PCA)
  • Singular Value Decomposition (SVD)
  • Linear Discriminant Analysis (LDA)
  • Independent Component Analysis (ICA)
  • t-Distributed Stochastic Neighbor Embedding (t-SNE)
  • UMAP (Uniform Manifold Approximation and Projection)
  • Autoencoder (AE)
  • Variational Autoencoder (VAE)

These techniques are used to reduce the number of features in a dataset while preserving as much information as possible.

  1. Explain the PCA steps in machine learning. ⭐⭐

Ans: Principal Component Analysis (PCA) is a widely used dimensionality reduction technique in machine learning. It is a linear technique that transforms the original dataset into a new set of uncorrelated variables called principal components. The goal of PCA is to find the directions (principal components) that capture the most variation in the data. The following are the steps for performing PCA:

  • Data Preprocessing: PCA is performed on the centered data, thus the first step is to center the data by subtracting the mean from each feature.
  • Covariance Matrix: The next step is to calculate the covariance matrix of the centered data. The covariance matrix is a square matrix that contains the variances and covariances of the features.
  • Eigenvalues and Eigenvectors: Calculate the eigenvalues and eigenvectors of the covariance matrix. Eigenvectors are the directions of maximum variation in the data, and eigenvalues are the corresponding magnitudes of this variation.
  • Principal Components: Select the top k eigenvectors that have the highest eigenvalues, where k is the number of dimensions you want to reduce to. These eigenvectors are the principal components of the data.
  • Dimensionality Reduction: The last step is to project the centered data onto the principal components using a dot product. This results in a new dataset with reduced dimensions (k dimensions) that captures the most variation in the original data.
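A from-scratch sketch of these PCA steps with NumPy on randomly generated data, for illustration only; in practice sklearn.decomposition.PCA wraps the same procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))                 # 100 samples, 5 features

# 1. Center the data
X_centered = X - X.mean(axis=0)

# 2. Covariance matrix of the centered data
cov = np.cov(X_centered, rowvar=False)

# 3. Eigen-decomposition of the (symmetric) covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)

# 4. Keep the top-k eigenvectors (principal components)
k = 2
order = np.argsort(eigvals)[::-1][:k]
components = eigvecs[:, order]

# 5. Project the centered data onto the principal components
X_reduced = X_centered @ components
print(X_reduced.shape)                        # (100, 2)
```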

  1. What are the types of cross-validation techniques? ⭐⭐

Ans: Cross-validation is a technique used to evaluate the performance of a machine learning model by dividing the data into training and testing sets and evaluating the model on the testing set. There are several types of cross-validation techniques that can be used, including:

  • K-fold Cross-Validation: This method involves dividing the data into k equally-sized "folds" and training the model on k-1 of the folds and testing it on the remaining fold. This process is repeated k times, with a different fold used as the testing set each time. The final performance measure is the average performance across all k iterations.
  • Leave-One-Out Cross-Validation (LOOCV): This method is similar to k-fold cross-validation, but it is a special case where k is equal to the number of observations in the dataset. This means that each observation is used once as the testing set and all other observations are used as the training set.
  • Stratified K-fold Cross-Validation: This method is used when the data is imbalanced in terms of the target variable. It preserves the class balance across the folds, which can be important for models that are sensitive to class imbalance.
  • Time Series Cross-Validation: This method is used when working with time-series data. It involves using a sliding window to split the data into training and testing sets. This technique is useful to evaluate the model's ability to make predictions for future time points.
  • Repeated Random Subsampling: This method is similar to k-fold cross-validation, but it is a non-exhaustive version where the data is randomly split into training and test sets multiple times. This method is useful when data is limited or expensive to acquire.
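A sketch of k-fold and stratified k-fold cross-validation with scikit-learn on a built-in toy dataset; the pipeline and model choice are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

kf  = KFold(n_splits=5, shuffle=True, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # preserves class balance per fold

print("k-fold           :", cross_val_score(model, X, y, cv=kf).mean())
print("stratified k-fold:", cross_val_score(model, X, y, cv=skf).mean())
```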

  1. What are the trade-offs between the different types of classification algorithms? How would you choose the best one? ⭐

Ans: Different types of classification algorithms have different strengths and weaknesses. The choice of algorithm depends on the specific characteristics of the data and the goal of the analysis. Here are some trade-offs between some common classification algorithms:

  • Logistic Regression: It's a simple and interpretable algorithm that works well for linearly separable data and can be extended to multi-class classification problems. However, it may not perform well on non-linearly separable data and it can be sensitive to outliers.
  • k-Nearest Neighbors (k-NN): It's a simple algorithm that is easy to understand and implement, and it can handle non-linearly separable data. However, it can be computationally expensive and it may not perform well on high-dimensional data.
  • Decision Trees: It's a simple and interpretable algorithm that can handle both numerical and categorical data, and it can also handle non-linearly separable data. However, it can be prone to overfitting and it may not perform well on high-dimensional data.
  • Random Forest: It's an ensemble method that combines multiple decision trees to improve the accuracy and reduce overfitting. However, it can be computationally expensive and it may not perform well on high-dimensional data.
  • Support Vector Machines (SVMs): It's a powerful algorithm that can handle non-linearly separable data and it is less sensitive to outliers. However, it can be computationally expensive and it may not perform well on high-dimensional data.

  1. How would you use the Naive Bayes classifier for categorical features? What if some features are numerical? ⭐⭐⭐

Ans: Naive Bayes is a probabilistic classifier based on Bayes' theorem, which states that the probability of a hypothesis (in this case, a class label) given some observations (in this case, feature values) is proportional to the probability of the observations given the hypothesis multiplied by the prior probability of the hypothesis.

  • When using Naive Bayes for categorical features, the feature values are treated as discrete, unordered values. For each feature, the model estimates the probability of each value given each class label. These probabilities are typically estimated from the training data using the maximum likelihood principle.
  • If some features are numerical, one common approach is to discretize the numerical features into bins, and then treat them as categorical features. This can be done by dividing the range of the numerical feature into a fixed number of intervals (bins) and assigning each observation to the interval to which it belongs.

  1. How do the ROC curve and AUC value help measure how good a model is? ⭐⭐

Ans: The Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) value are commonly used to evaluate the performance of binary classification models.

  • The ROC curve is a plot of the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The TPR is the proportion of true positive cases that are correctly identified as such, while the FPR is the proportion of false positive cases that are incorrectly identified as true positives. A good classifier will have a high TPR and a low FPR, which corresponds to a point in the upper left corner of the ROC space.
  • The AUC value is the area under the ROC curve. It ranges from 0 to 1, with a value of 1 indicating a perfect classifier and a value of 0.5 indicating a random classifier. AUC measures the trade-off between the TPR and the FPR of a classifier. A model with a higher AUC is considered to be better at distinguishing between the positive and negative classes.
  • A ROC curve is useful when the distribution of the positive and negative classes is imbalanced. An AUC of 1 means that there is no overlap between the positive and negative score distributions, while an AUC of 0.5 means that the two distributions overlap completely. AUC is also independent of the classification threshold, which means it is insensitive to changes in the decision boundary of the classifier, making it a robust measure of model performance.

  1. Can you choose a classifier based on the size of the training set? ⭐⭐

Ans: The size of the training set can be a factor to consider when choosing a classifier, but it is not the only one. Here are some general guidelines on how the size of the training set can affect the choice of classifier:

  • Small training set: When the training set is small, it can be challenging to train a model with a high degree of complexity. In this case, simple and interpretable models such as logistic regression or decision trees may be more appropriate. Additionally, techniques such as k-fold cross-validation or bootstrapping can be used to make better use of the limited data.

  • Large training set: When the training set is large, more complex models such as Random Forest, Neural Networks or Support Vector Machines (SVMs) can be used to take advantage of the additional data and improve the model's performance.

  • High-dimensional data: High-dimensional data can be challenging for some models such as k-Nearest Neighbors (k-NN) and decision trees, which may not perform well on high-dimensional data. In this case, models such as linear discriminant analysis (LDA), logistic regression or SVMs may be more appropriate.

  • Class imbalance: If the data set is imbalanced, models such as decision trees and random forests may be sensitive to class imbalance and perform poorly. In this case, models such as SVMs with appropriate kernel or cost-sensitive learning can be more appropriate.


📙 Back to Top Section

Regression Techniques (Concepts)

  1. Can you explain the difference between simple linear regression and multiple linear regression? How do you decide which one to use for a given problem? ⭐⭐

Ans: Simple linear regression is a type of linear regression where the target variable is predicted using a single predictor variable. The relationship between the predictor variable and the target variable is represented by a straight line. The equation for a simple linear regression model is:

y = b0 + b1*x

Where y is the target variable, x is the predictor variable, b0 is the y-intercept, and b1 is the slope of the line.

Multiple linear regression, on the other hand, is a type of linear regression where the target variable is predicted using multiple predictor variables. The relationship between the predictor variables and the target variable is represented by a hyperplane. The equation for a multiple linear regression model is:

y = b0 + b1*x1 + b2*x2 + ... + bn*xn

Where y is the target variable, x1, x2, ..., xn are the predictor variables, b0 is the y-intercept, and b1, b2, ..., bn are the coefficients of the predictor variables.

  • When deciding which type of linear regression to use, you should consider the number of predictor variables and the relationship between the predictor variables and the target variable. If there is only one predictor variable and the relationship between the predictor variable and the target variable is approximately linear, then simple linear regression is a good choice. If there are multiple predictor variables and the relationship between the predictor variables and the target variable is approximately linear, then multiple linear regression is a good choice.
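A minimal sketch fitting a simple and a multiple linear regression with scikit-learn; the data are randomly generated so the recovered coefficients can be compared with the ones used to simulate them.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, size=100)
x2 = rng.uniform(0, 5, size=100)
y  = 3.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=100)   # simulated target

simple   = LinearRegression().fit(x1.reshape(-1, 1), y)            # y = b0 + b1*x1
multiple = LinearRegression().fit(np.column_stack([x1, x2]), y)    # y = b0 + b1*x1 + b2*x2

print(simple.intercept_, simple.coef_)
print(multiple.intercept_, multiple.coef_)
```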

  1. What are the assumptions of linear regression and how do you check if they hold for a given dataset? ⭐

Ans: The assumptions of linear regression are:

  • Linearity: The relationship between the predictor variables and the target variable is linear. This means that a straight line can be used to approximate the relationship.
  • Independence of errors: The errors (residuals) of the model are independent and identically distributed. This means that the errors are not correlated and have the same distribution.
  • Homoscedasticity: The variance of the errors is constant across all levels of the predictor variables. This means that the spread of the errors is the same for all levels of the predictor variables.
  • Normality: The errors are normally distributed. This means that the distribution of the errors follows a normal (or Gaussian) distribution.
  • No multicollinearity: The predictor variables are not highly correlated with each other. This means that the predictor variables are independent of each other.
These assumptions can be checked with residual plots (linearity and homoscedasticity), Q-Q plots or normality tests on the residuals (normality), the Durbin-Watson statistic (independence of errors), and variance inflation factors (multicollinearity).

  1. How do you handle categorical variables in linear regression? ⭐

Ans: Categorical variables can be encoded before fitting the model; common techniques include:

  • OneHot Encoding: One-hot encoding is a technique that converts a categorical variable into multiple binary variables, one for each category.
  • Ordinal Encoding: Ordinal encoding is a technique that assigns an integer value to each category of a categorical variable. This method is useful when the categories have an ordinal relationship.
  • Effect Encoding: Effect Encoding is a technique that represents each categorical variable as a set of contrasts between the different levels of that variable and a reference level.

  1. What is the difference between Lasso & Ridge regression? ⭐

Ans: Both Lasso and Ridge regression are types of linear regression, but they have different approaches to solving the problem of overfitting, which occurs when a model is too complex and captures the noise in the data as well as the underlying relationship.

  • Lasso (Least Absolute Shrinkage and Selection Operator) regression, also known as L1 regularization, adds a penalty term to the cost function equal to the absolute value of the magnitude of the coefficients. Lasso regression is useful for selecting important features of a dataset: it shrinks the coefficients of less important features to exactly zero, and it is generally used when the number of features is high.

  • Ridge regression, also known as L2 regularization, is another type of linear regression that uses a penalty term equal to the square of the magnitude of the coefficients. Instead of shrinking the coefficients to zero, this penalty term helps to shrink the coefficients of less important features towards zero, but it doesn't make them exactly zero. Ridge Regression is useful to handle multicollinearity problems by reducing the variance of the coefficients.



  1. How do we select the right regularization parameter? ⭐

Ans: Regularization is a technique used to prevent overfitting by adding a penalty term to the cost function of a model. The regularization term is controlled by a parameter, called the regularization parameter or hyperparameter. The value of this parameter determines the strength of the regularization.

  • There are several ways to select the right regularization parameter:
    • Grid Search: Grid search is a technique that involves training the model with a range of different regularization parameter values and evaluating the performance of each one using a validation set. The regularization parameter value that results in the best performance is chosen as the final value.
    • LassoCV: LassoCV is a variant of Lasso Regression that uses cross-validation to select the regularization parameter. It performs the grid search and cross-validation internally and returns the best regularization parameter.
    • RidgeCV: RidgeCV is a variant of Ridge Regression that uses cross-validation to select the regularization parameter. It performs the grid search and cross-validation internally and returns the best regularization parameter.
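A sketch of selecting the regularization strength with LassoCV, RidgeCV, and a plain grid search on a built-in toy dataset; the alpha grid is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoCV, Ridge, RidgeCV
from sklearn.model_selection import GridSearchCV

X, y = load_diabetes(return_X_y=True)

lasso = LassoCV(cv=5).fit(X, y)                                     # picks alpha internally
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(X, y)

grid = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 13)}, cv=5).fit(X, y)

print("LassoCV alpha   :", lasso.alpha_)
print("RidgeCV alpha   :", ridge.alpha_)
print("GridSearch alpha:", grid.best_params_["alpha"])
```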

  1. Why is Logistic Regression called a Linear Model? ⭐

Ans: Logistic Regression is called a linear model because the log-odds of the outcome are a linear function of the input features. The model can be written as:

log(p / (1 - p)) = b0 + b1*x1 + b2*x2 + ... + bn*xn

where p is the probability that the input belongs to the positive class, x1, x2, ..., xn are the input features, and b0, b1, b2, ..., bn are the model coefficients. The coefficients represent the contribution of each feature to the log-odds. The predicted probability itself is obtained by passing this linear combination through the logistic (sigmoid) function, which is non-linear, but the relationship on the log-odds scale, and hence the decision boundary, remains linear.


  1. Can Logistic Regression be used for an imbalanced dataset problem? ⭐⭐

Ans: Logistic regression can be used in an imbalanced dataset problem, but it is important to be aware of the limitations and potential issues that can arise.

  • One potential issue with using logistic regression on imbalanced datasets is that the model may be biased towards the majority class. This means that the model will have a higher accuracy, but it will not perform well on the minority class. This can be addressed by using techniques such as oversampling the minority class, undersampling the majority class, or using a combination of both.
  • Another potential issue with using logistic regression on imbalanced datasets is that the model may not be sensitive to the minority class. This means that the model will have a low recall for the minority class. This can be addressed by adjusting the threshold for classifying an example as the minority class.

Additionally, it's important to use evaluation metrics other than accuracy, such as precision, recall, F1-score, and AUC.


  1. Can you explain the concept of polynomial regression? How does it differ from linear regression? ⭐⭐

Ans: Polynomial regression is a type of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial. A polynomial regression model can be represented by an equation of the form:

y = b0 + b1x + b2x^2 + b3x^3 + ... + bnx^n

where y is the output variable, x is the input variable, and b0, b1, b2, ..., bn are the model coefficients. The coefficients represent the contribution of each degree of x to the output variable.

  • The main difference between polynomial regression and linear regression is the form of the equation. Linear regression uses a linear equation of the form y = b0 + b1x, while polynomial regression uses a polynomial equation that can model non-linear relationships between the input and output variables.
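A short sketch of polynomial regression via PolynomialFeatures plus LinearRegression; the quadratic data are simulated for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 60).reshape(-1, 1)
y = 1.0 + 2.0 * x.ravel() + 0.5 * x.ravel() ** 2 + rng.normal(scale=0.3, size=60)

# Expand x into [1, x, x^2] and fit an ordinary linear regression on top
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(x, y)

print(model.predict([[2.0]]))     # prediction from the fitted quadratic
```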

  1. How do you deal with outliers in linear regression? What are the most common techniques for identifying and dealing with outliers? ⭐⭐

Ans: Outliers in linear regression can have a significant impact on the model parameters and can lead to poor model performance. Here are some common techniques for identifying and dealing with outliers:

  • Visualization: One of the simplest ways to identify outliers is by visualizing the data using scatter plots, box plots, or histograms. Outliers will typically appear as points that are far away from the main cluster of data points.
  • Z-score: The Z-score is a measure of how many standard deviations an observation is from the mean. Observations with a Z-score greater than a certain threshold (usually 3) can be considered outliers.
  • Cook's distance: Cook's distance is a measure of the influence of each observation on the model parameters. Observations with a high Cook's distance can be considered outliers.
  • Outlier removal: Once the outliers are identified, they can be removed from the dataset. However, this method should be used with caution, as it can lead to loss of important information and can bias the model.
  • Robust regression: Robust regression is a technique that is less sensitive to outliers; common methods include least median of squares (LMS) and Huber regression.
  • Winsorization: Winsorization is a technique that replaces the outliers with a certain value. For example, replacing all the observations greater than 3 standard deviations from the mean with the value of 3 standard deviations from the mean.

  1. What are the most common evaluation metrics in regression problems? ⭐

Ans: There are several common evaluation metrics used in regression problems, some of the most popular include:

  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • Root Mean Squared Error (RMSE)
  • R-squared (R²)
  • Mean Absolute Percentage Error (MAPE)
  • Adjusted R-squared (Adjusted R²)

  1. How is R-squared different from Adjusted R²? What is the main difference? ⭐

Ans: R-squared (R²) and adjusted R-squared (adjusted R²) are both measures of the goodness of fit of a regression model, but they are slightly different.

  • R-squared is a measure of the proportion of variance in the dependent variable that can be explained by the independent variable(s) in the model. It ranges from 0 to 1, where a value of 1 indicates that the model perfectly explains the variance in the dependent variable. R-squared is calculated as the ratio of the explained variance to the total variance.
  • Adjusted R-squared is an extension of R-squared that adjusts for the number of predictor variables in the model. The main difference between R-squared and adjusted R-squared is that adjusted R-squared penalizes the model for the number of predictor variables that are not useful in explaining the variance in the dependent variable.
  • This makes adjusted R-squared a more conservative measure of the goodness of fit of a regression model, as it penalizes models with a large number of predictor variables that do not explain a significant amount of variance in the dependent variable. It is calculated as Adjusted R² = 1 - (1 - R²) * (n - 1) / (n - p - 1), where n is the number of observations and p is the number of predictors.
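A small sketch computing R² and adjusted R² by hand; the predictions and the assumed number of predictors (p = 2) are hypothetical.

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0, 11.0, 13.0]
y_pred = [2.8, 5.3, 6.9, 9.4, 10.8, 13.1]

n, p = len(y_true), 2                        # 2 predictor variables (assumed)
r2 = r2_score(y_true, y_pred)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(r2, adj_r2)                            # adjusted R² <= R²
```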

  1. Can you explain the concept of Elastic Net regression? How does it differ from Ridge and Lasso regression? ⭐⭐

Ans: Elastic Net is a linear regression model that combines the properties of both Ridge and Lasso regression. Like Ridge, it adds an L2 regularization term to the cost function to prevent overfitting. Like Lasso, it adds an L1 regularization term to the cost function to perform feature selection. The trade-off between the L1 and L2 penalties is controlled by a mixing parameter, alpha, that ranges from 0 to 1. When alpha = 0, the model becomes a Ridge regression, and when alpha = 1, it becomes a Lasso regression.


  1. For a sparse dataset where most of the values are 0, which supervised classification algorithm should we use? ⭐⭐

Ans: When dealing with a sparse dataset where most of the values are zero, a suitable supervised classification algorithm is the Naive Bayes classifier, especially the Multinomial Naive Bayes variant. It relies on counting the occurrences of features, so it can handle a large number of zero values and still learn useful information from the features that do occur. It uses multinomial distributions for the likelihood estimates of the features, which makes it robust to sparse data.


  1. What are the regularization techniques other than L1 and L2 in machine learning? ⭐

Ans: Some regularization techniques beyond plain L1 and L2 include:

  • Elastic Net Regularization: This is a combination of L1 and L2 regularization, where a linear combination of the L1 and L2 penalties is used in the cost function. It helps to balance the trade-off between sparsity and shrinkage.

  • Bayesian Regularization: This is a Bayesian approach to regularization that assumes a prior distribution on the parameters; this helps to prevent overfitting by reducing the variance of the parameter estimates.

  1. Which evaluation metrics are sensitive to outliers?

Ans: Evaluation metrics based on squared errors, such as mean squared error, are highly sensitive to outliers because they square the individual differences between predicted and actual values. Mean absolute error is also affected, though less severely, while the median absolute error is largely resistant to outliers.


  1. What is the difference between bagging and boosting? ⭐

Ans:Bagging and Boosting are two ensemble methods that are used to improve the performance of machine learning models.

  • Bagging (Bootstrap Aggregating) is a method that involves training multiple models on different subsets of the training data, and then averaging (for regression) or voting (for classification) the predictions made by each model. The subsets of the training data are created by randomly sampling the data with replacement. Bagging helps to reduce the variance of the model, making it less prone to overfitting.
  • Boosting is a method that involves training multiple models in a sequential manner, where each model tries to correct the errors made by the previous model. The idea is to give more weight to the data points that are misclassified by the previous models, so that the next model can focus on those points. Boosting helps to reduce the bias of the model, making it more accurate.


  1. How does a Decision Tree regressor work?.⭐

Ans:A Decision Tree Regressor is a supervised learning algorithm used to predict a continuous target variable from several input features. The algorithm works by recursively splitting the data into subsets based on the values of the input features; each split is chosen to maximize the reduction in impurity (typically the variance or mean squared error) of the target variable. The prediction for a new sample is the mean of the target values of the training samples in the leaf it falls into.
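
A minimal sketch with scikit-learn is shown below; the synthetic sine data and the max_depth value are assumptions for illustration only.

```python
# Hedged sketch of a Decision Tree Regressor fit on noisy 1-D data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

# Each split is chosen to reduce the variance (impurity) of the target.
tree = DecisionTreeRegressor(max_depth=4)
tree.fit(X, y)
print(tree.predict([[2.5]]))   # mean target value of the matching leaf
```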


  1. Why don’t we use Mean Squared Error as a cost function in Logistic Regression?.⭐

Ans:MSE is the natural cost function for linear regression, but it is not suitable for logistic regression. Because the model's output passes through the sigmoid, MSE becomes a non-convex function of the parameters, so gradient descent can get stuck in poor local minima; it is also a weak measure of the difference between predicted probabilities (between 0 and 1) and true class labels. The log loss (cross-entropy) is the appropriate cost function for logistic regression because it is convex for this model and heavily penalizes predictions that are confident but incorrect.
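
A small numeric illustration follows; the single true label and predicted probability are made-up values used only to show how differently the two losses penalize a confident mistake.

```python
# Hedged illustration: log loss punishes confident wrong predictions far more than MSE.
import numpy as np

y_true = 1.0      # true class label
p_hat = 0.01      # model is confidently wrong

mse = (y_true - p_hat) ** 2
log_loss = -(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

print(f"MSE: {mse:.3f}")            # ~0.98, bounded by 1
print(f"Log loss: {log_loss:.3f}")  # ~4.61, grows without bound as p_hat -> 0
```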


  1. How can we avoid Over-fitting in Logistic Regression models?

Ans:Regularization (L1/L2), feature selection to remove redundant or highly correlated features, cross-validation for hyperparameter tuning, early stopping, gathering more training data, and ensemble methods are some of the techniques that can be used to avoid overfitting in logistic regression models.


  1. Explain the significance of log odds in Logistic Regression with an example.

Ans:In logistic regression, the log odds (also known as the logit) is a measure of the relationship between the independent variables and the dependent binary variable. The log odds is the natural logarithm (ln) of the odds of the dependent variable being a 1 (success) versus a 0 (failure).

For example, let's say we are trying to predict the likelihood of a customer purchasing a product based on their age and income. The log odds for a customer with an age of 30 and an income of $50,000 purchasing the product would be calculated as:

ln(P(purchase=1)/(1-P(purchase=1))) = ln(odds of purchasing)

Where P(purchase=1) is the probability of purchasing the product.

The significance of the log odds in logistic regression is that it lets us model the probability of the binary outcome through a quantity that is a linear function of the independent variables: ln(P/(1-P)) = b0 + b1*age + b2*income. From this linear relationship we can recover the probability of the outcome (e.g., purchase or no purchase) for any combination of inputs, and each coefficient can be interpreted as the change in log odds per unit change in its variable.
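
A hypothetical worked example is sketched below; the coefficient values are entirely made up for illustration and are not fitted to any real data.

```python
# Hedged worked example: from log odds (logit) to probability.
import math

b0, b_age, b_income = -4.0, 0.05, 0.00003   # hypothetical coefficients
age, income = 30, 50_000

log_odds = b0 + b_age * age + b_income * income      # linear in the inputs
odds = math.exp(log_odds)
probability = odds / (1 + odds)                      # same as 1 / (1 + exp(-log_odds))

print(f"log odds: {log_odds:.2f}, odds: {odds:.2f}, P(purchase=1): {probability:.2f}")
```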


  1. Between Linear regression and Random forest regression which model will perform better in Airbnb House price prediction and why?

Ans:It depends on the specific characteristics of the data and the problem at hand. Both linear regression and random forest regression can be used to predict house prices on Airbnb, but they have different strengths and weaknesses.

  • Linear regression is a simple and interpretable model that assumes a linear relationship between the independent and dependent variables and normally distributed errors. It is a good choice when the relationship is approximately linear and there are not too many independent variables or outliers in the data. Linear regression also assumes no multicollinearity, i.e., the independent variables should not be highly correlated with each other.
  • Random Forest, on the other hand, is an ensemble method that combines the predictions of multiple decision trees to improve overall predictive performance. It is a good choice when the relationship between the independent and dependent variables is non-linear, or when there are many independent variables or outliers in the data. Random Forest also handles multicollinearity well. For Airbnb prices, where features such as location, amenities, and seasonality tend to interact in non-linear ways, Random Forest will usually perform better, while linear regression remains the more interpretable baseline.

📙 Back to Top Section

Classification Techniques ( Concepts )

Support Vector Machine (SVM)

  1. When would you use SVM vs Logistic regression?.⭐⭐⭐

Ans:Support Vector Machine (SVM) and Logistic Regression are both supervised learning algorithms that can be used for classification tasks. However, there are some key differences between the two that may influence which one you choose to use for a given problem.

  • Linear vs Non-linear decision boundary: SVM can learn non-linear decision boundaries by using the kernel trick, while logistic regression can only learn linear decision boundaries. This means that SVM is more suitable for problems where the data is not linearly separable, while logistic regression is more suitable for problems where the data is linearly separable.

  • Outliers: SVM is less sensitive to outliers, as it only depends on the support vectors (i.e., the data points that are closest to the decision boundary). Logistic regression, on the other hand, is more sensitive to outliers, as the model parameters are estimated based on the entire dataset.

  • Large datasets: SVM can be less efficient when dealing with large datasets because of the high computational cost of the algorithm (kernel SVM training scales roughly quadratically to cubically with the number of samples). Logistic regression is computationally cheaper and scales well to large datasets.

  • Multi-class classification: Logistic Regression can be easily extended to multi-class classification problems by training multiple binary classifiers and combining them, while SVM requires a different approach for multi-class classification.


  1. Why would you use the Kernel Trick?.⭐⭐⭐

Ans:In classification problems, the goal is to establish a decision boundary that maximizes the margin between the classes. In practice this becomes difficult when the data is not linearly separable. One approach is to map all the data points into a higher-dimensional space, find the boundary there, and then classify. However, as the number of dimensions grows, computations in that space become increasingly expensive. The kernel trick lets us operate on the original feature space as if it had been mapped into the higher-dimensional space, without ever computing the coordinates of the data in that space, which makes the approach far more efficient. Commonly used kernel functions include:

  • linear,
  • polynomial,
  • radial basis function (RBF), and
  • sigmoid.

Each of them can be suitable for a particular problem depending on the data.

  1. What is the Hinge Loss in SVM?.⭐⭐⭐

Ans:In Support Vector Machines (SVMs), the hinge loss is the loss function used to train the model to classify data points correctly with a margin. It is defined as the maximum of 0 and one minus the product of the true label and the predicted score, represented by the following equation:

hinge loss = max(0, 1 - y*(wx+b))

where y is the true class label (-1 or 1), wx+b is the predicted score, and w and b are the model parameters.

  • The hinge loss penalizes the model whenever a point is misclassified or falls inside the margin: whenever y*(wx+b) < 1 the loss is positive, and it grows linearly as the point moves further onto the wrong side of the margin.
  • When a point is correctly classified and lies outside the margin (y*(wx+b) >= 1), the hinge loss is zero and there is no penalty.
  • The hinge loss is a convex function which makes the optimization problem in SVM a convex optimization problem, which can be solved efficiently.
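
A minimal numpy sketch of the hinge loss is shown below; the weights, bias, and data points are arbitrary values chosen only to demonstrate the computation.

```python
# Hedged sketch: hinge loss for a linear SVM score f(x) = w.x + b.
import numpy as np

w, b = np.array([0.8, -0.5]), 0.1
X = np.array([[1.0, 2.0], [2.0, -1.0], [-1.5, 0.5]])
y = np.array([1, -1, 1])             # true labels in {-1, +1}

scores = X @ w + b                   # predicted scores f(x)
hinge = np.maximum(0, 1 - y * scores)

print(hinge)          # 0 when y*f(x) >= 1 (correct with margin), grows linearly otherwise
print(hinge.mean())   # average hinge loss over the batch
```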

  1. Can you explain the concept of the kernel trick and how it relates to SVM?.⭐⭐⭐

Ans: The kernel trick is a technique used in Support Vector Machines (SVMs) to transform the input data into a higher-dimensional space, where a linear decision boundary can be found. This allows SVMs to model non-linear decision boundaries even though the optimization problem is solved in a linear space.

  • The kernel trick works by defining a kernel function, which is a mathematical function that takes in two input vectors and returns a scalar value. The kernel function is used to compute the inner product of the input vectors in a higher-dimensional space, without explicitly computing the coordinates of the input vectors in that space. This is why it is called the "kernel trick."

    • The most common kernel functions used in SVMs are:
      • Linear kernel: It is defined as a simple inner product of two input vectors, which corresponds to a linear decision boundary.
      • Polynomial kernel: It is defined as the inner product of two input vectors raised to a power, which corresponds to a polynomial decision boundary.
      • Radial basis function (RBF) kernel: It is defined as the exponential of the negative Euclidean distance between two input vectors, which corresponds to a non-linear decision boundary. kernel

  1. Can you explain the concept of the soft-margin SVM and how it differs from the hard-margin SVM?.⭐⭐⭐

Ans: Support Vector Machines (SVMs) are a powerful algorithm for binary classification problems. The standard SVM algorithm is known as the hard-margin SVM, which aims to find the maximum-margin hyperplane, which is a decision boundary that separates the two classes with the greatest possible margin. A margin is defined as the distance between the decision boundary and the closest data points from each class, known as support vectors.

  • The soft-margin SVM allows for some misclassifications by introducing a slack variable for each data point that measures the degree of misclassification. This slack variable is controlled by a parameter called the regularization parameter. The goal of the soft-margin SVM is to find a decision boundary that maximizes the margin while minimizing the sum of the slack variables.
  • In the hard-margin SVM, all observations must be classified correctly, while in the soft-margin SVM a certain amount of misclassification is tolerated, controlled by the regularization parameter (often called C). A very large C approaches the hard-margin SVM, since misclassifications are heavily penalized, while a smaller C allows more slack and makes the decision boundary less sensitive to outliers. svm

  1. What are the components of a Support Vector Machine?.⭐⭐⭐

Ans:Support Vector Machine (SVM) is a supervised learning algorithm that can be used for classification and regression tasks. The SVM algorithm has several components, which include:

  • A set of input features (also called predictors or independent variables) and a target variable (also called a dependent variable).
  • A kernel function that maps the input features into a higher-dimensional space, where a linear boundary can be found. Common kernel functions include linear, polynomial, and radial basis function (RBF).
  • A decision boundary, which is a hyperplane that separates the different classes in the feature space. The decision boundary is chosen to maximize the margin, which is the distance between the decision boundary and the closest data points of each class.
  • Support vectors, which are the data points that are closest to the decision boundary and define the margin. The support vectors are the only data points that directly influence the decision boundary.
  • A slack variable, which is used to allow for misclassifications in the data. The slack variable is used to control the trade-off between the margin and the number of misclassifications.

  1. How does the choice of kernel function affect the performance of SVM classifier?.⭐⭐⭐

Ans: The choice of kernel function in Support Vector Machine (SVM) classifier has a significant impact on the performance of the model. The kernel function maps the input data into a higher-dimensional space, where a linear boundary can be found. Different kernel functions have different properties that make them more suitable for different types of data and tasks.

  • Linear Kernel: A linear kernel is the simplest kernel function, which maps the input data into a linear space. This kernel function is suitable for data that is linearly separable, but it may not perform well on data that is non-linearly separable.
  • Polynomial Kernel: A polynomial kernel maps the input data into a polynomial space. It can be useful for data that is not linearly separable. A polynomial kernel is sensitive to the degree of the polynomial, a high degree will lead to overfitting while a low degree will lead to underfitting.
  • Radial Basis Function (RBF) Kernel: The RBF kernel maps the input data into an infinite-dimensional space, where a linear boundary can be found. It is a popular choice for non-linearly separable data and it is less sensitive to the choice of parameters than the polynomial kernel.
  • Sigmoid kernel: The sigmoid kernel is based on the hyperbolic tangent function and behaves similarly to a two-layer neural network. It is mainly used in binary classification problems and is generally less popular than the RBF kernel, partly because it is only a valid kernel for certain parameter settings.
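
A hedged scikit-learn sketch comparing kernels on a non-linearly separable toy dataset follows; the dataset, noise level, and untuned hyperparameters are assumptions made for illustration.

```python
# Comparing SVM kernels on the "two moons" dataset (not linearly separable).
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, gamma="scale")
    print(kernel, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```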

Some Other variations of the Above Questions.

  1. How does SVM handle the case of non-linearly separable data?⭐⭐⭐
  2. How does the SVM algorithm handle multi-class classification problems?⭐⭐⭐
  3. How does one interpret the support vectors in an SVM classifier? ⭐⭐⭐
  4. How does the concept of margin maximization in SVM classifier relate to model interpretability?⭐⭐⭐
  5. How does the concept of kernel trick relate to the curse of dimensionality in SVM classifier?⭐⭐⭐

📙 Back to Top Section

Decision Tree Concepts

  1. How does a decision tree classifier work?.⭐

Ans:A decision tree classifier is a type of algorithm used for both classification and regression tasks. The basic idea behind a decision tree is to divide the feature space into smaller regions, called "leaves", by recursively partitioning the data based on the values of the input features.

  • The process of building a decision tree classifier starts with selecting the best feature to split the data at the root node. This selection is based on a metric such as information gain or Gini impurity.
  • Once the root node is selected, the data is split into subsets based on the values of the selected feature, and the process is repeated on each subset. For each subset, the best feature to split the data is chosen and the subset is further divided into smaller subsets. This process continues until a stopping criterion is met, such as a maximum tree depth or a minimum number of samples in a leaf node.

  1. What are the most common techniques for pruning a Decision Tree?.⭐⭐

Ans: Decision tree pruning is a technique used to reduce the complexity of a decision tree and prevent overfitting. Here are some common techniques for pruning a decision tree:

  • Reduced Error Pruning: This technique starts at the leaves of the tree and works its way up, removing branches that do not improve the accuracy of the tree. The tree is pruned by removing branches that do not decrease the classification error when removed.
  • Cost Complexity Pruning: This technique uses a parameter called complexity parameter (alpha) to control the trade-off between the tree's size and its accuracy. The tree is pruned by removing branches that do not decrease the classification error by more than a certain threshold.
  • Minimum Description Length (MDL) Pruning: This technique uses the principle of Occam's razor, which states that the simplest explanation is usually the best. The tree is pruned by removing branches that do not decrease the description length of the tree (a measure of the tree's complexity)
  • Pre-pruning: It is a technique that stops the tree-building process before it perfectly fits the training data. It prevents the tree from growing too deep and overfitting.
  • Post-pruning: It is a technique that starts with a fully grown tree and removes branches that do not improve the accuracy of the tree.

  1. How do you choose the optimal depth of a decision Tree?.⭐

Ans:Choosing the optimal depth of a decision tree is important in order to prevent both overfitting and underfitting. Common approaches include:

  • Pre-pruning: stop growing the tree early using constraints such as a maximum depth or a minimum number of samples per leaf.
  • Post-pruning: grow a full tree and then remove branches that do not improve validation accuracy.
  • Cost Complexity Pruning: use a complexity parameter (alpha) to control the trade-off between the tree's size and its accuracy.
  • Cross-validation: treat the depth as a hyperparameter and select the value with the best cross-validated score (a hedged sketch follows below).
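
The sketch below uses scikit-learn's grid search with cross-validation; the dataset and the candidate depth grid are illustrative assumptions.

```python
# Hedged sketch: choosing max_depth by cross-validated grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 4, 5, 6, 8, 10, None]},
    cv=5,
)
search.fit(X, y)
print("best depth:", search.best_params_, "CV accuracy:", round(search.best_score_, 3))
```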

  1. What is Entropy & Gini Impurity? What are the differences between them? Which one is faster in terms of computation?.⭐

Ans:Entropy and Gini impurity are two measures used to evaluate the quality of a split in a decision tree. They are used to quantify the amount of disorder or randomness in a set of data.

  • Entropy is a measure of impurity in a set of examples, defined as Entropy = -Σ p_i * log2(p_i), i.e., the sum over classes of the negative probability of each class multiplied by the logarithm of that probability.
  • Gini impurity is another impurity measure used in decision tree algorithms, defined as Gini = 1 - Σ p_i², which equals the probability of misclassifying a randomly chosen example if it were labeled at random according to the class distribution.
  • However, the calculation method is different. Entropy is calculated based on the probability of each class, while Gini impurity is calculated based on the probability of misclassifying a random example.
  • In terms of computation, Gini impurity is faster to compute than entropy because it does not require logarithmic calculations.
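
A minimal numpy sketch of both measures is shown below; the label vector is an arbitrary example chosen only to demonstrate the formulas.

```python
# Hedged sketch: entropy needs a logarithm per class, Gini does not.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))       # -sum(p * log2(p))

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)          # 1 - sum(p^2), cheaper to compute

labels = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(round(entropy(labels), 3))   # ~0.954 bits
print(round(gini(labels), 3))      # ~0.469
```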

  1. What is the fundamental concept of ID3, CART, CHAID, C4.5 Algorithm?.⭐⭐⭐

Ans:ID3, CART, CHAID, and C4.5 are all decision tree algorithms used for classification and regression tasks. They are all based on the same fundamental concept of recursively partitioning the feature space into smaller regions, called "leaves", by selecting the best feature to split the data at each node. However, they differ in the way they select the best feature and the stopping criteria for the tree-building process.

  • ID3 (Iterative Dichotomizer 3) algorithm: It is a decision tree algorithm developed by Ross Quinlan in 1986. It uses information gain as the criterion to select the best feature to split the data at each node. ID3 is a greedy algorithm that selects the feature that maximizes the information gain at each step.
  • CART (Classification and Regression Trees) algorithm: It is a decision tree algorithm developed by Breiman et al. in 1984. It uses Gini impurity as the criterion to select the best feature to split the data at each node. CART is a binary tree algorithm that creates two children for each node.
  • CHAID (Chi-squared Automatic Interaction Detection) algorithm: It is a decision tree algorithm developed by Gordon Kass in 1980. It uses chi-squared test as the criterion to select the best feature to split the data at each node. CHAID is used to analyze categorical data and it allows more than two children for each node.
  • C4.5 : C4.5 is an extension of the ID3 algorithm, it is also based on the concept of information gain but it also takes into account the possibility of continuous-valued features and does pruning to avoid overfitting. C4.5 uses the gain ratio as a measure of feature importance, which is a normalized version of information gain. It also uses a threshold value to determine when to stop growing the tree.

  1. What is Information gain in a Decision tree?.⭐

Ans:Information gain is a measure used in decision tree algorithms to evaluate the quality of a split in the data. It measures the reduction in impurity or uncertainty in a set of examples after a split is made on a specific feature, and is calculated as the difference between the entropy of the original set and the weighted average of the entropies of the subsets created by the split: Information Gain = Entropy(parent) - Σ (n_i / n) * Entropy(child_i).
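
A small worked sketch is shown below; the parent labels and the candidate split are made-up values used only to walk through the weighted-average calculation.

```python
# Hedged sketch: information gain of one candidate binary split.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left, right = np.array([0, 0, 0, 1]), np.array([0, 1, 1, 1])   # one candidate split

n = len(parent)
weighted_children = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
info_gain = entropy(parent) - weighted_children
print(round(info_gain, 3))   # reduction in entropy achieved by the split
```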


  1. What are the main hyperparameters in a Decision Tree?.⭐⭐

Ans:There are several hyperparameters that can be adjusted in decision tree algorithms, here are some of the main ones:

  • Maximum depth: This parameter controls the maximum depth of the tree. A tree with a larger maximum depth will have more branches and will be able to model more complex relationships in the data, but it also increases the risk of overfitting.
  • Minimum samples per leaf: This parameter controls the minimum number of samples required to form a leaf node. A tree with smaller minimum samples per leaf will have more branches, but it also increases the risk of overfitting.
  • Minimum samples per split: This parameter controls the minimum number of samples required to form a split. A tree with smaller minimum samples per split will have more branches, but it also increases the risk of overfitting.
  • Maximum number of features: This parameter controls the maximum number of features that can be considered when looking for the best split. By default, the decision tree will consider all the features, but if the number of features is high, the algorithm might be slow.
  • Criterion: This parameter controls the impurity measure used to evaluate the quality of a split. The most common ones are Gini impurity and information gain.
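
The sketch below shows how these hyperparameters appear in scikit-learn's API; the dataset and the specific values are illustrative, not tuned.

```python
# Hedged sketch of the main decision tree hyperparameters in scikit-learn.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

clf = DecisionTreeClassifier(
    max_depth=4,            # maximum depth of the tree
    min_samples_leaf=5,     # minimum samples required in a leaf node
    min_samples_split=10,   # minimum samples required to attempt a split
    max_features=None,      # number of features considered per split (None = all)
    criterion="gini",       # impurity measure: "gini" or "entropy"
    random_state=0,
)
clf.fit(X, y)
print("tree depth:", clf.get_depth())
```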

  1. How do we handle categorical variables in decision trees?.⭐⭐⭐

Ans: Using different Encoding Strategies.

  • One-hot encoding
  • Binary Encoding
  • Ordinal Encoding
  • Directly using categorical variables: Some decision tree algorithms, such as C4.5, C5.0, and CART, can handle categorical variables natively without converting them into numerical variables; they create a separate branch (or grouping of categories) for each split on that variable.

  1. What is the difference between the OOB score and the validation score?

Ans:OOB (Out-of-Bag) score and validation score are two different ways to evaluate the performance of a decision tree algorithm.

  • The validation score is the performance of the model on a held-out dataset, which is a dataset that is separate from the training dataset. The validation dataset is used to evaluate the performance of the model and to tune the hyperparameters. The most common validation techniques are k-fold cross-validation and holdout validation.
  • OOB score, on the other hand, is a measure of the model's performance that is calculated using the samples that are not used in the training of each decision tree. In bagging-based ensembles like random forests, some samples are not used in the training of each tree. These samples are called out-of-bag samples, and their predicted classes are used to calculate the OOB score.

  1. What are the most important hyperparameters in the XGBoost algorithm?.⭐⭐⭐

Ans:XGBoost (eXtreme Gradient Boosting) is an ensemble learning algorithm that is commonly used for classification and regression tasks. The algorithm has several hyperparameters that can be adjusted to optimize the performance of the model. Here are some of the most important hyperparameters in XGBoost:

  • Learning rate (eta): This parameter controls the step size at which the algorithm learns from the data. A smaller learning rate will require more boosting rounds to reach the same level of performance, but it also reduces the risk of overfitting.
  • Maximum depth (max_depth): This parameter controls the maximum depth of the decision tree, which affects the complexity of the model. A larger maximum depth will result in a more complex model, but it also increases the risk of overfitting.
  • Number of estimators (n_estimators): This parameter controls the number of boosting rounds to be executed. A larger number of estimators produces a more complex model and increases the risk of overfitting, so it is usually tuned together with the learning rate.
  • Subsample (subsample): This parameter controls the fraction of the training rows used to fit each tree. Sampling less than the full dataset introduces randomness that acts as a regularizer and helps reduce overfitting, at the cost of slightly higher bias.
  • Column subsample (colsample_bytree): This parameter controls the fraction of the features used in each tree. Sampling fewer columns per tree also acts as a regularizer, similar to the feature sampling in Random Forest.
  • Regularization (lambda, alpha): These parameters control the strength of the L2 (lambda) and L1 (alpha) penalties on the leaf weights. Larger values produce a simpler model and reduce the risk of overfitting.
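
A hedged sketch of these hyperparameters in the xgboost Python package follows; it assumes xgboost is installed, and the dataset and values are illustrative rather than tuned.

```python
# Hedged sketch of the key XGBoost hyperparameters discussed above.
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = XGBClassifier(
    learning_rate=0.1,        # eta: step size of each boosting round
    max_depth=4,              # depth of each tree
    n_estimators=300,         # number of boosting rounds
    subsample=0.8,            # fraction of rows sampled per tree
    colsample_bytree=0.8,     # fraction of columns sampled per tree
    reg_lambda=1.0,           # L2 regularization on leaf weights
    reg_alpha=0.0,            # L1 regularization on leaf weights
)
model.fit(X, y)
print(model.score(X, y))      # training accuracy of the fitted model
```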

  1. How does the decision tree algorithm handle imbalanced datasets and what are the techniques to tackle it?.⭐⭐⭐

Ans: Decision tree algorithms can handle imbalanced datasets by modifying the criteria used for splitting the data. With imbalanced datasets, a majority class can easily dominate the decision tree and make it less sensitive to the minority class. There are several techniques that can be used to tackle imbalanced datasets in decision tree:

  • Cost-sensitive learning: This method assigns different misclassification costs to different classes, so that the decision tree algorithm gives more importance to the minority class.
  • Synthetic data generation: This method generates synthetic samples of the minority class to balance the dataset. Common techniques include SMOTE (Synthetic Minority Over-sampling Technique) and ADASYN (Adaptive Synthetic Sampling).
  • Pruning: This method removes branches of the decision tree that have low importance or low accuracy. This can help to reduce the overfitting of the majority class and improve the performance for minority class.
  • Ensemble methods: This method combines multiple decision tree models to improve the performance of the minority class. The most common ensemble method is Random Forest, which generates multiple decision trees and uses the majority vote for prediction.
  • Using different evaluation metrics: Instead of accuracy, metrics such as precision, recall, F1-score, or the G-mean are more appropriate for imbalanced datasets.
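
The sketch below illustrates one of the options above (cost-sensitive learning via class weights) together with an imbalance-aware metric; the synthetic 95/5 dataset and tree settings are assumptions for illustration.

```python
# Hedged sketch: cost-sensitive decision tree on an imbalanced dataset, scored with F1.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

plain = DecisionTreeClassifier(max_depth=5, random_state=0)
weighted = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)

for name, clf in [("plain", plain), ("class_weight='balanced'", weighted)]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name:25s} F1 = {f1:.3f}")
```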

Some Other variations of the Above Questions.

  1. How does the concept of random forests relate to decision tree and how does it improve performance?⭐⭐⭐
  2. Can you discuss the use of decision tree for regression problems and the differences with classification tasks?⭐⭐⭐
  3. Can you discuss the interpretability of decision tree models and how it is related to the depth of the tree? ⭐⭐⭐

📙 Back to Top Section

Boosting Algorithms (GBM, LightGBM, CatBoost)

  1. What are the key two principles of LightGBM?.⭐

Ans: LightGBM is a gradient boosting framework that uses tree-based learning algorithms. The key two principles of LightGBM are:

  • Gradient-based One-Side Sampling (GOSS): This principle uses only a subset of the data to grow each tree, which reduces computation time and memory usage. Instances with large gradients (i.e., the ones the current model fits poorly) are all kept, while instances with small gradients are randomly sampled and re-weighted, so each split is found on less data with little loss of accuracy.
  • Exclusive Feature Bundling (EFB): This principle bundles mutually exclusive features (sparse features that rarely take non-zero values at the same time) into a single feature. This reduces the effective number of features that have to be scanned for splits, which speeds up training with little loss of accuracy.

  1. How does the Gradient Boosting algorithm differ from the Random Forest algorithm?.⭐

Ans:Gradient Boosting and Random Forest are both ensemble learning techniques that combine multiple weak models to create a strong model. However, there are several key differences between the two algorithms:

  • Learning Method: Random Forest is an ensemble of decision trees that are independently grown, whereas Gradient Boosting is an ensemble of decision trees that are grown sequentially. Gradient Boosting trains each tree to correct the mistakes of the previous tree.
  • Performance: Gradient Boosting often produces more accurate models than Random Forest, but it is also more prone to overfitting and more sensitive to hyperparameter tuning. Random Forest is less prone to overfitting, though its models are frequently somewhat less accurate than a well-tuned boosted model.
  • Feature Importance: Random Forest calculates feature importance by averaging the decrease in impurity over all the trees in the forest, whereas Gradient Boosting implementations typically report importance as the total gain (impurity reduction) contributed by each feature across all splits.

  1. How does the learning rate parameter in the Gradient Boosting algorithm affect the performance of the model?.⭐

Ans:The learning rate controls the step size at which the algorithm moves in the direction of the negative gradient of the loss function, i.e., how much each new tree contributes to the ensemble. A smaller learning rate generally gives a more accurate model but requires more boosting rounds and takes longer to train, while a larger learning rate trains faster but usually gives a less accurate model. A good value is often found with a grid search, or by using a learning rate schedule that decreases the rate as the algorithm approaches convergence.

  1. What are the most important hyperparameters in GBM?.⭐

Ans:Gradient Boosting Machine (GBM) has several important hyperparameters that can affect the performance of the model. The most important hyperparameters in GBM are:

  1. min_samples_split
    • Defines the minimum number of samples (or observations) required in a node for it to be considered for splitting.
  2. min_samples_leaf
    • Defines the minimum number of samples (or observations) required in a terminal node or leaf.
    • Generally, lower values should be chosen for imbalanced class problems because the regions in which the minority class is in the majority will be very small.
  3. max_depth
    • The maximum depth of a tree.
    • Used to control over-fitting, as a higher depth allows the model to learn relations very specific to a particular sample.
    • Should be tuned using CV.
  4. learning_rate (boosting hyperparameter)
    • The shrinkage applied to each tree's contribution; lower values need more trees but usually generalize better.
  5. n_estimators
    • The number of sequential trees (boosting stages) to build; tuned together with the learning rate.
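
A hedged sketch of these hyperparameters using scikit-learn's GradientBoostingClassifier follows; the dataset and values are illustrative, not tuned.

```python
# Hedged sketch of the GBM hyperparameters listed above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

gbm = GradientBoostingClassifier(
    min_samples_split=20,   # samples required in a node to consider splitting
    min_samples_leaf=10,    # samples required in a terminal leaf
    max_depth=3,            # depth of each individual tree
    learning_rate=0.1,      # shrinkage applied to each tree's contribution
    n_estimators=200,       # number of boosting stages
    random_state=0,
)
print(round(cross_val_score(gbm, X, y, cv=5).mean(), 3))
```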

  1. How does the concept of Gradient Boosting relate to boosting theory, and how does it improve the performance of weak learners?.⭐

Ans:Gradient Boosting is a machine learning technique that is based on the concept of boosting, which is a method for combining multiple weak learners to create a strong learner. The key idea behind boosting is to iteratively train weak models and combine their predictions in a way that improves the overall performance of the model.
In Gradient Boosting, the weak learners used are decision trees and the algorithm trains them in a sequence, where each tree tries to correct the mistakes of the previous tree.


  1. What are the key principles of the CatBoost classifier?.⭐

Ans:CatBoost is a gradient-boosting framework that is specifically designed to handle categorical variables. The key principles of CatBoost are:

  • Categorical feature handling: CatBoost can handle categorical variables without the need for one-hot encoding, which can decrease the dimensionality of the data and improve the model's performance.
  • Handling missing values: CatBoost can handle missing values in the data by creating a special "NA" category for each categorical feature.
  • Handling overfitting: CatBoost uses ordered boosting, in which each model is trained on one permutation of the data and evaluated on examples it has not yet seen, reducing prediction shift; it also provides an overfitting detector that stops training when the validation metric stops improving.
  • Handling target leakage: when encoding categorical features, CatBoost uses ordered target statistics, so the encoding for each example is computed only from examples that come before it in a random permutation, preventing an example's own target from leaking into its features.
  • Handling class imbalance: CatBoost has a built-in mechanism to handle class imbalance, which can occur when one class is much more prevalent than the other.

📙 Back to Top Section

Naive Bayes Classifier

  1. How does the assumption of independence between features affect the performance of Naive Bayes classifier?.⭐

Ans:The assumption of independence between features in the Naive Bayes classifier affects the performance of the classifier in two main ways:

  • Simplifies the calculations: By assuming independence between features, the classifier can calculate the probability of a class given a set of features by simply multiplying the individual probabilities of each feature. This simplifies the calculations and makes the classifier computationally efficient.
  • Decreases accuracy: However, assuming independence between features is often unrealistic, particularly in many real-world problems. If the features are not truly independent, the classifier will make incorrect predictions. For example, in text classification, the presence of certain words in a document may imply the presence of other words. Ignoring this relationship by assuming independence can lead to a decrease in accuracy.
  • Overconfidence: because correlated features are treated as independent, their evidence is effectively counted multiple times, which pushes the predicted probabilities toward 0 or 1 and makes the classifier overconfident in its predictions.

  1. Why is the Naive Bayes classifier called "naive"?.⭐

Ans:Naive Bayes Classifier is called "naive" because it makes the assumption that the features are independent of each other, which is often unrealistic, but it allows the classifier to make predictions based on the individual probabilities of each feature, rather than considering the relationships between the features. This simplifies the calculations and makes the classifier computationally efficient, but it can lead to a decrease in accuracy if the features are not truly independent.


  1. How does the choice of prior probability distribution affect the performance of Naive Bayes classifier?

Ans :The choice of prior probability distribution in the Naive Bayes classifier can affect the performance of the model in several ways:

  • Bias: The prior probability distribution is used to estimate the probability of a class given a set of features. If the prior probability distribution is biased, the classifier will make biased predictions.
  • Overfitting: If the prior probability distribution is set to be very high for a specific class, the classifier may fit too well to the training data and may not generalize well to new data.
  • Underfitting: On the other hand, if the prior probability distribution is set to be very low for a specific class, the classifier may not fit well to the training data and may not capture the underlying patterns in the data.
  • Data Imbalance: If the data set has an imbalanced distribution of classes, the prior probability distribution can be set to reflect the class distribution in the data set in order to improve the classifier's performance.

  1. How does the Naive Bayes classifier perform in the presence of irrelevant features?

Ans :The Naive Bayes classifier is relatively robust to irrelevant features. Because it only considers the individual feature probabilities and not the relationships between features, a feature whose distribution is roughly the same across classes contributes almost nothing to the posterior and therefore has little effect on the prediction. In practice, many irrelevant or noisy features can still add up and degrade accuracy slightly, so feature selection can still help.


  1. How does the concept of Laplace smoothing in the Naive Bayes classifier improve the model's performance?

Ans:Laplace smoothing, also known as add-k smoothing, is a technique used in Naive Bayes classifier to improve the model's performance. The main idea behind Laplace smoothing is to avoid zero probabilities, which can cause the model to make incorrect predictions.

  • The problem with zero probabilities arises when the classifier encounters a new feature or class for which there is no training data.
  • Laplace smoothing solves this problem by adding a small constant k to the numerator and denominator of the probability estimates. This ensures that no probability is zero, and the classifier can still make predictions even when it encounters new features or classes.
  • For example, in text classification, if a word appears in a test document but never appeared with a given class in the training set, the probability of that word given the class would be zero and the whole product would collapse to zero. Adding a small constant k to the numerator (and k times the vocabulary size to the denominator) of the probability estimate keeps every probability non-zero, so the classifier can still make sensible predictions for unseen words.
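
A tiny numeric sketch of add-k smoothing is shown below; the vocabulary and word counts are made up purely to show the effect of the smoothing constant.

```python
# Hedged sketch: Laplace (add-k) smoothing of word likelihoods for one class.
import numpy as np

vocab = ["free", "offer", "meeting", "report"]
spam_counts = np.array([30, 25, 1, 0])        # word counts observed in the spam class
k = 1                                         # smoothing constant

unsmoothed = spam_counts / spam_counts.sum()
smoothed = (spam_counts + k) / (spam_counts.sum() + k * len(vocab))

for word, p0, p1 in zip(vocab, unsmoothed, smoothed):
    print(f"{word:8s} unsmoothed={p0:.3f} smoothed={p1:.3f}")
# "report" has probability 0 without smoothing, which would zero out any document
# containing it; with smoothing it receives a small non-zero probability instead.
```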

📙 Back to Top Section

Stats & Probablity Fundamentals

  1. In any 15-minute interval, there is a 20% chance that you see at least one shooting star. What is the probability that you see at least one shooting star in the next 60 minutes?

Ans:The probability of seeing at least one shooting star in the next 60 minutes can be found by using the complement rule. The complement rule states that the probability of an event happening is equal to 1 minus the probability of it not happening.

  • In this case, the probability of not seeing a shooting star in any 15-minute interval is (1 - 0.20) = 0.80. Therefore, the probability of not seeing a shooting star in the next 60 minutes, if we assume the intervals are independent, is (0.8)^4 = 0.4096.
  • Now, the probability of seeing at least one shooting star in the next 60 minutes is 1 - 0.4096 = 0.5904.

  1. Find the probability of getting 53 Sundays in a non-leap year.

Ans: A non-leap year has 365 days, and 52 full weeks, which is 364 days. Therefore, there is one day left over that is not part of a full week. Since there are 7 days in a week, we can assume that the remaining day can be any one of the 7 days of the week.

  • As a result, there are at least 52 Sundays in a non-leap year. The one day left over beyond those 52 full weeks can be a Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, or Sunday, and a 53rd Sunday occurs only if that leftover day is a Sunday.

  • Probability = (Number of favourable outcomes / Total number of outcomes)

    Here, number of favourable outcomes = 1

    And, total number of outcomes = 7

    So, Probability = 1/7


  1. Types of Selection bias.

Ans:

  • Sampling bias: occurs when a sample is not representative of the population it is intended to represent.
  • Time interval bias: occurs when the study population is not observed for the same amount of time.
  • Location bias: occurs when the study population is not selected from the same geographic location.
  • Recruitment bias: occurs when the study population is not recruited in the same way.
  • Information bias: occurs when the data collected is not accurate or complete.
  • Confirmation bias: occurs when the researcher only looks for data that confirms their preconceived notions.
  • Survivorship bias: occurs when only the individuals who survive or succeed are studied, while those who fail or drop out are ignored.
  • Observer bias: occurs when the researcher's own beliefs or expectations influence the results of the study.

  1. What are the common types of probability distributions, with examples?.

Ans: There are many types of probability distributions, and the specific types and examples can vary depending on the field or context in which they are being used. Some common types of probability distributions include:

  • Bernoulli distribution: describes the probability of a binary outcome, such as the outcome of a coin flip.
  • Binomial distribution: describes the probability of a certain number of successes in a fixed number of trials, such as the probability of getting 4 heads in 8 coin flips.
  • Poisson distribution: describes the probability of a given number of events occurring in a fixed interval of time or space, such as the number of car accidents on a particular stretch of road in a given month.
  • Normal distribution (or Gaussian distribution): describes the probability of a continuous variable, such as a person's height or intelligence, taking on different values.
  • Exponential distribution: describes the probability of the time between events in a Poisson process, such as the time between customers arriving at a store.
  • Uniform distribution: describes the probability of a variable taking on any value within a given range, such as the probability of rolling a number between 1 and 6 on a fair die.
  • Log-normal distribution: a continuous probability distribution of a random variable whose logarithm is normally distributed.
  • Chi-square distribution: a probability distribution that is used in the common chi-square goodness-of-fit test and in confidence interval estimation for the population standard deviation of a normal distribution from a sample.

  1. What are the types of sampling?.

Ans: There are several types of sampling techniques, including:

  • Simple Random Sampling: Every member of the population has an equal chance of being selected.
  • Systematic Sampling: Members of the population are selected at regular intervals.
  • Stratified Sampling: The population is divided into subgroups (strata) and a random sample is taken from each subgroup.
  • Cluster Sampling: The population is divided into groups (clusters) and a random sample of clusters is selected.
  • Multistage Sampling: A combination of any of the above sampling methods is used.
  • Convenience Sampling: Participants are selected based on their willingness and availability.
  • Quota Sampling: Sampling is done based on predefined quotas for different subgroups within the population.
  • Snowball Sampling: Participants recruit other participants that meet certain criteria.

  1. Difference between sample size and margin of error.

Ans: Sample size and margin of error are two important concepts in statistical sampling.

  • Sample size refers to the number of observations or individuals that are included in a sample. A larger sample size generally leads to a more precise estimate of the population parameter of interest, as it reduces the variability of the sample estimate. However, increasing the sample size also increases the cost and resources required to collect the data.
  • Margin of error, on the other hand, refers to the amount of error or uncertainty that is associated with a sample estimate. It is often reported as a percentage or a decimal and it represents the range of values that is likely to include the true population parameter with a certain level of confidence. The margin of error is affected by the sample size, the level of confidence, and the variability of the population. A smaller margin of error indicates a more precise estimate.
  • In summary, sample size is the number of observations in a sample, while the margin of error is the level of precision or uncertainty of the sample estimate. The two are inversely related: the margin of error shrinks roughly in proportion to the square root of the sample size, so quadrupling the sample size roughly halves the margin of error.
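
The short sketch below works through the standard margin-of-error formula for a proportion, MOE = z * sqrt(p * (1 - p) / n); the confidence level and sample sizes are illustrative.

```python
# Hedged sketch: margin of error for a proportion at ~95% confidence (z ~= 1.96).
import math

z, p = 1.96, 0.5          # p = 0.5 is the worst case for variability
for n in [100, 400, 1600]:
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"n={n:5d}  margin of error ~ {moe:.3f}")
# Quadrupling the sample size roughly halves the margin of error.
```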

📙 Back to Top Section

Deep Learning Fundamentals

CNN Fundamentals ( Cost Function, Backpropagation)

  1. How do you handle overfitting in deep learning models? Can you discuss various techniques such as early stopping, dropout, and regularization, and when it would be appropriate to use each?.⭐

Ans:Overfitting in deep learning models occurs when a model is trained too well on the training data, and as a result, it performs poorly on unseen data. This happens because the model has learned the noise in the training data, rather than the underlying pattern.

  • There are several techniques that can be used to handle overfitting in deep learning models, including:
  • Early stopping: This technique involves monitoring the performance of the model on a validation set during training, and stopping the training process when the performance on the validation set starts to decrease. This helps to prevent the model from continuing to overfit to the training data.
  • Dropout: This technique involves randomly setting some of the neurons in the network to zero during each iteration of training. This helps to prevent the model from relying too heavily on any one neuron or feature, which can reduce overfitting.
  • Regularization: This technique involves adding a penalty term to the loss function that the model is trying to optimize. This penalty term discourages the model from having large weights, which can help to reduce overfitting. Some examples of regularization techniques include L1 and L2 regularization, and weight decay.
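
A hedged Keras sketch combining the three techniques follows; it assumes TensorFlow/Keras is available, and the random data, architecture, and hyperparameter values are purely illustrative.

```python
# Hedged sketch: dropout + L2 regularization + early stopping in Keras.
import numpy as np
from tensorflow.keras import layers, models, regularizers, callbacks

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")   # toy binary target

model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3)),  # L2 weight penalty
    layers.Dropout(0.3),                                     # randomly drops 30% of units
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop], verbose=0)
```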

  1. How do Neural Networks get the optimal Weights and Bias values?.⭐

Ans:

  • The neural networks get the optimal weights and bias values through an Error Gradient. error_gradient

  • To decide whether to increase or decrease the current weights and bias, the network needs to know the direction in which the error decreases. This direction is given by the gradients of the error with respect to the weights and bias.

  • The gradient values are calculated with the backpropagation algorithm.

  • An optimization algorithm (such as gradient descent) then uses the gradients to update the weights and bias. backpropagation
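
A minimal numpy sketch of the update rule is shown below for a single linear neuron with MSE loss; the data, learning rate, and iteration count are arbitrary, and the analytic gradients stand in for what backpropagation computes layer by layer in a full network.

```python
# Hedged sketch: gradient descent updates for one linear neuron, w = w - lr * dL/dw.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
y = np.array([1.0, 1.5, 3.0])
w, b, lr = np.zeros(2), 0.0, 0.05

for _ in range(2000):
    y_pred = X @ w + b
    error = y_pred - y
    grad_w = 2 * X.T @ error / len(y)   # dLoss/dw
    grad_b = 2 * error.mean()           # dLoss/db
    w -= lr * grad_w                    # step against the gradient
    b -= lr * grad_b

print(w, b)                             # learned weights and bias
```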


  1. What is the difference between loss and cost function in Deep Learning?.⭐

Ans:

  • Loss Function : A loss function is a function that measures the difference between the predicted output of a model and the true output. The goal of training a deep learning model is to minimize the loss, so that the predicted output of the model is as close as possible to the true output. loss_function

  • A cost function, on the other hand, is a function that measures the performance of a model, taking into account both the loss and any additional constraints or regularization. The cost function is used to optimize the model's parameters during training. The cost function can be represented as the sum of the loss function and the regularization term. The cost function measures the model’s error on a group of objects, whereas the loss function deals with a single data instance. cost_function


  1. What are the roles of an Activation Function?.⭐

Ans:

  • Activation functions help to keep the output of a neuron restricted to a certain range as required; if no limit is set, the output can reach very large magnitudes. Most activation functions squash the output into a range such as (-1, 1) or (0, 1).
  • The most important role of the activation function is the ability to add non-linearity to the neural network. Most of the models in real-life are non-linear so the activation functions help to create a non-linear model.
  • The activation function is responsible for deciding whether a neuron should be activated or not.

  1. What's the difference between Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) and in which cases would use each one?.⭐

Ans: Convolutional neural nets apply a convolution to the data before using it in fully connected layers.

  • They are best used in cases where you want positional invariance, that is to say, you want features to be captured regardless of where they are in the input sample.
  • CNNs are used primarily for image and video processing tasks, such as image recognition, object detection, and image segmentation. They are designed to take advantage of the spatial structure of image data by using convolutional layers, which scan the image with small filters to detect patterns and features.

RNNs, on the other hand, are used for tasks involving sequential data, such as natural language processing and speech recognition. They are designed to handle data with temporal dependencies by using recurrent layers, which allow information to be passed from one time step to the next. RNNs can also be used in tasks such as language modeling, machine translation, and image captioning.


  1. What are the steps in a CNN? Explain the significance of each step.⭐

Ans: The steps in a Convolutional Neural Network (CNN) typically include the following:

  • Convolution: This is the process of scanning the input image with small filters (also known as kernels) to detect patterns and features in the image. The filters are typically smaller than the input image and are moved across the image in a sliding window fashion. This step is significant as it allows the network to learn the important features from the input image, which can be used for image classification or other tasks.
  • Non-Linearity (Activation Function): After the convolution step, the output is passed through an activation function such as ReLU, which adds non-linearity to the network. This step is significant as it allows the network to learn complex, non-linear relationships in the data.
  • Pooling: This step is used to reduce the spatial dimensions of the data while preserving important features. The most common types of pooling are max pooling and average pooling. This step is significant as it helps to reduce the computation cost and prevent overfitting.
  • Normalization: This step is used to standardize the data and helps to improve the overall performance of the network. Batch normalization is a common technique used for this purpose.
  • Fully Connected Layers: After the convolutional, pooling and normalization layers, the output is passed through one or more fully connected layers. This step is significant as it allows the network to make a final prediction based on the learned features.
  • Softmax function: In the case of classification problem the output of the fully connected layers is passed through a softmax function to obtain the final class probabilities.
  • Backpropagation: After the forward pass, the error is calculated and backpropagated through the network to update the weights.
  • Training: The network is trained using a dataset, and the weights are updated using an optimization algorithm such as Gradient Descent.
  • Evaluation: The performance of the network is evaluated on a test set to measure the overall accuracy of the network on unseen data. cnn
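
A hedged Keras sketch of these building blocks follows; it assumes TensorFlow/Keras, and the input shape and layer sizes are illustrative rather than prescribed.

```python
# Hedged sketch: convolution -> ReLU -> pooling -> batch norm -> dense -> softmax.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),   # convolution + non-linearity
    layers.MaxPooling2D(pool_size=2),                       # pooling reduces spatial size
    layers.BatchNormalization(),                            # normalization
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                    # fully connected layer
    layers.Dense(10, activation="softmax"),                 # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```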

  1. Types of optimization algorithms in CNNs and their pros and cons.⭐

Ans: There are several types of optimization algorithms that can be used to train a convolutional neural network (CNN), each with its own pros and cons. Some commonly used optimization algorithms include:

  1. Stochastic Gradient Descent (SGD): This is a simple and widely used optimization algorithm that updates the weights by taking the gradient of the loss function with respect to the weights. One of the main advantages of SGD is that it is computationally efficient and easy to implement. However, it can be sensitive to the learning rate and may get stuck in local minima.

  2. Momentum: This is an extension of SGD that incorporates a momentum term to help the algorithm overcome local minima and converge faster. Momentum helps the optimization to continue in the direction of the previous steps, reducing the chances of getting stuck in a local minimum.

  3. Nesterov Accelerated Gradient (NAG): This is another variation of SGD that uses the momentum to estimate the next position of the weights, and then corrects it. This method is less sensitive to the learning rate and allows to reach the optimal solution faster than SGD and Momentum.

  4. Adagrad: This algorithm adapts the learning rate for each parameter individually, which can lead to faster convergence for some problems. However, since it uses a different learning rate for each parameter, it can lead to some parameters being updated more frequently than others, which can cause issues in some cases.

  5. Adadelta: This algorithm is an extension of Adagrad that tries to overcome its issue by using an average of the historical gradient updates to determine the learning rate for each parameter.

  6. Adam: Adaptive Moment Estimation (Adam) is an optimization algorithm that combines the advantages of Adagrad and Momentum. It uses moving averages of the parameters to provide a running estimate of the second raw moments of the gradients; the term adaptive in the name refers to the fact that the algorithm adapts the learning rates of each parameter based on the historical gradient information.

  7. RMSprop: This algorithm is similar to Adadelta and Adam, it also uses moving averages of the gradient updates to determine the learning rate for each parameter, but it uses the root mean square of the gradients instead of the sum of the gradients.

  8. Nadam: This algorithm is an extension of Adam and Nesterov Momentum.


  1. Types of activation functions and their output ranges in CNNs.⭐

Ans: Activation functions are used in the neurons of a convolutional neural network (CNN) to introduce non-linearity into the network. There are several types of activation functions, each with its own properties and output range, some commonly used activation functions include:

  1. Sigmoid: This function maps any input value to a value between 0 and 1. It's mostly used in the output layer of binary classification problems.

  2. ReLU (Rectified Linear Unit): This function maps any input value less than 0 to 0 and any input value greater than 0 to the same value. It's widely used in the hidden layers of the network.

  3. Tanh (Hyperbolic Tangent): This function maps any input value to a value between -1 and 1. It's similar to the sigmoid function but outputs values in a wider range.

  4. Leaky ReLU: This function is similar to ReLU, but instead of mapping negative input values to 0, it maps them to a small positive value (e.g. 0.01). This helps to avoid the "dying ReLU" problem, where a neuron can become "dead" and output zero for all input values.

  5. ELU (Exponential Linear Unit): This function is similar to Leaky ReLU, but the negative part of the function has a slope of alpha, where alpha is a hyperparameter. This helps to avoid the "dying ReLU" problem and also helps to speed up the convergence.

  6. Softmax: This function is used in the output layer of multi-class classification problems. It maps any input values to a probability distribution over n classes, where n is the number of classes. sigmoid
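
A minimal numpy sketch of these functions and their output ranges follows; the input vector is arbitrary and used only to demonstrate the shapes of the outputs.

```python
# Hedged sketch of common activation functions and their output ranges.
import numpy as np

def sigmoid(x): return 1 / (1 + np.exp(-x))                       # (0, 1)
def relu(x): return np.maximum(0, x)                              # [0, inf)
def tanh(x): return np.tanh(x)                                    # (-1, 1)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)       # small slope for x < 0
def elu(x, alpha=1.0): return np.where(x > 0, x, alpha * (np.exp(x) - 1))  # (-alpha, inf)
def softmax(x):
    e = np.exp(x - x.max())     # shift for numerical stability
    return e / e.sum()          # probabilities summing to 1

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(sigmoid(x), relu(x), tanh(x), leaky_relu(x), elu(x), softmax(x), sep="\n")
```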


  1. What is dying ReLu Problem?.⭐

Ans: The "dying ReLU" problem is a phenomenon that can occur in convolutional neural networks (CNNs) when using the rectified linear unit (ReLU) activation function. The ReLU activation function maps any input value less than 0 to 0, which can cause some neurons to "die" and output zero for all input values. This happens because if the weights in the network are not properly initialized or if the learning rate is too high, it is possible for the weights to update in such a way that the input to the ReLU activation function becomes negative for all training examples.


  1. What are the different cost functions for binary and multiclass classification problem?.⭐

Ans:

  • Binary Classification
    • In binary classification, a common cost function used is the binary cross-entropy loss. It is also known as log loss. The binary cross-entropy loss measures the dissimilarity between predicted probability and the true label.
    • Another cost function used for binary classification is the hinge loss or the maximum-margin loss.
  • Multiclass Classification
    • In multiclass classification, a common cost function used is the categorical cross-entropy loss. It is also known as the negative log likelihood loss. It measures the dissimilarity between the predicted probability and the true label.
    • Another cost function used for multiclass classification is the sparse categorical cross-entropy loss. It is used when the classes are mutually exclusive, and only one class can be the correct one. cross_entropy

  1. What is Vanishing & Exploding Gradient problem in CNN.⭐

Ans: The vanishing and exploding gradient problem is a common issue that can occur in deep neural networks, including convolutional neural networks (CNNs).

  • The vanishing gradient problem occurs when the gradients of the parameters become very small during the training process. This can happen when the activation function used in the network is a saturating function, such as the sigmoid or hyperbolic tangent, which can squash the gradients to very small values.
  • The exploding gradient problem occurs when the gradients of the parameters become very large during the training process. This can happen when the parameters are not properly initialized, or when the gradients are not properly scaled. When the gradients become very large, the optimization algorithm can make large updates to the parameters, causing the network to become unstable and diverge.

  1. How do you approach transfer learning in deep learning, and when would you use a pre-trained model versus training from scratch?.⭐

Ans:Transfer learning is a technique in deep learning that allows a model that has been trained on one task to be used as a starting point for a new task. This can be done by using the pre-trained weights of a model that has already been trained on a large dataset, and then fine-tuning the model on the new task using a smaller dataset.

  • There are several ways to approach transfer learning in deep learning, depending on the specific task and the available resources. Some common approaches include:
    • Using a pre-trained model as a feature extractor: In this approach, the pre-trained model is used to extract features from the new task data, and a new classifier is trained on top of the extracted features. This is useful when the new task data is small and there is not enough data to train a model from scratch.
    • Fine-tuning a pre-trained model: In this approach, the pre-trained model is used as a starting point, and the weights of the model are further trained on the new task data. This is useful when the new task data is similar to the data that the pre-trained model was trained on, and there is enough data to fine-tune the model.
    • Using a pre-trained model as an initialization: In this approach, the pre-trained model is used as an initialization for training a new model from scratch. This is useful when the new task data is different from the data that the pre-trained model was trained on, but still has some similarities, and there is enough data to train a new model.
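A minimal sketch of the first two approaches described above (assuming PyTorch and torchvision >= 0.13; the `num_classes` value and the choice of ResNet-18 are hypothetical placeholders):

```python
# Feature extraction vs. fine-tuning with a pre-trained backbone.
import torch.nn as nn
from torchvision import models

num_classes = 5                                   # hypothetical new task

# 1) Feature-extractor approach: freeze the pre-trained backbone, train only a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the pre-trained weights
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable classifier head

# 2) Fine-tuning approach: start from the same pre-trained weights but keep all layers
#    trainable (often with a smaller learning rate for the backbone).
model_ft = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model_ft.fc = nn.Linear(model_ft.fc.in_features, num_classes)
```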

  1. What are the weight initialization techniques in Deep Learning?.⭐

Ans:

Weight initialization is an important step in training deep neural networks, as it can have a significant impact on the network's performance and ability to converge. There are several weight initialization techniques that can be used, each with its own advantages and disadvantages. Some commonly used weight initialization techniques include:

    1. Random initialization: In this technique, the weights are initialized to random values drawn from a normal or uniform distribution. This is a simple and widely used technique, but it can lead to slow convergence or getting stuck in a suboptimal solution.
    2. Xavier (Glorot) initialization: This technique is used to initialize the weights for layers with saturating activation functions, such as sigmoid and tanh. It uses a normal distribution with a standard deviation of sqrt(2/(fan_in + fan_out)), often simplified to sqrt(1/fan_in), where fan_in is the number of input neurons. This initialization helps to balance the scale of the input and output of each neuron and can speed up convergence.
    3. He initialization: This technique is used to initialize the weights for layers with non-saturating activation functions, such as ReLU and its variants. It uses a normal distribution with a standard deviation of sqrt(2/fan_in). This initialization helps to prevent the vanishing gradient problem and can speed up convergence.
    4. LeCun initialization: This technique keeps the weight variance at 1/fan_in; in its uniform form the weights are drawn from a uniform distribution with limits ±sqrt(3/fan_in). It was originally proposed for sigmoid/tanh-style units and is also the recommended initialization for the self-normalizing SELU activation. (A short code sketch of these schemes follows below.)
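A short sketch (assuming PyTorch) of applying these schemes to a single linear layer; the layer sizes are arbitrary.

```python
# Applying Xavier, He (Kaiming), and LeCun initialization to one linear layer.
import math
import torch.nn as nn

layer = nn.Linear(256, 128)

# Xavier/Glorot (for saturating activations such as tanh/sigmoid)
nn.init.xavier_normal_(layer.weight)

# He/Kaiming (for ReLU-style activations)
nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')

# LeCun normal: std = sqrt(1 / fan_in), written out by hand here for clarity
fan_in = layer.weight.shape[1]
nn.init.normal_(layer.weight, mean=0.0, std=math.sqrt(1.0 / fan_in))

nn.init.zeros_(layer.bias)   # biases are commonly initialized to zero
```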

  1. What is the difference between a Multi-headed CNN and a Multi-channel CNN?.⭐

Ans:

  • A multi-headed CNN is a type of neural network architecture that uses multiple parallel branches, each with its own set of layers, to process different aspects of the input data. These parallel branches are also known as "heads" and they are typically combined at the end of the network to make a final prediction. Multi-headed CNNs are useful for tasks that involve multiple types of data, such as image and text data.
  • A multi-channel CNN is a type of neural network architecture that uses multiple parallel branches, each with its own set of filters, to process different channels of the input data. Each channel of the input data corresponds to a different type of feature, such as different color channels in an image.
  • Multi-headed CNNs are used to process multiple types of data, such as image and text, while multi-channel CNNs are used to process different channels of a single type of data, such as different color channels in an image. Both architectures can improve the performance of the network by learning different features from the data. (A toy two-headed example is sketched below.)
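A toy sketch (assuming PyTorch) of a two-headed network that combines an image branch with a separate feature-vector branch; all layer sizes and the 100-dimensional "text features" input are hypothetical.

```python
# Two parallel "heads" (image branch + feature-vector branch) merged for one prediction.
import torch
import torch.nn as nn

class TwoHeadedNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Image head: a small multi-channel CNN (3 input channels, e.g. RGB).
        self.image_head = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (batch, 16)
        )
        # Feature head: a small MLP over a hypothetical 100-dim text/feature vector.
        self.text_head = nn.Sequential(nn.Linear(100, 16), nn.ReLU())
        # Combined classifier on the concatenated head outputs.
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, image, text_features):
        merged = torch.cat([self.image_head(image), self.text_head(text_features)], dim=1)
        return self.classifier(merged)

model = TwoHeadedNet()
logits = model(torch.randn(4, 3, 32, 32), torch.randn(4, 100))
print(logits.shape)   # torch.Size([4, 3])
```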

📙 Back to Top Section

Deep Learning - NLP

  1. Explain every block of Attention Networks and their working principles..⭐

Ans: Attention networks are a type of neural network architecture that allows the model to focus on specific parts of the input while processing it. They are commonly used in natural language processing tasks such as machine translation and text summarization. Here are the main components of attention networks and their working principles:

  • Encoder: The encoder is responsible for processing the input and creating a representation of it. This can be done using a recurrent neural network (RNN) or a transformer network. The encoder creates a sequence of hidden states, which are then used as the input for the attention mechanism.

  • Attention Mechanism: The attention mechanism is responsible for determining which parts of the input are most important for the task at hand. It takes in the hidden states created by the encoder, as well as a query vector that represents the task-specific information. The attention mechanism then calculates a weight for each hidden state, indicating how important it is for the task. These weights are then used to create a weighted sum of the hidden states, which is used as the input for the decoder.

  • Decoder: The decoder is responsible for generating the output based on the attention-weighted input from the attention mechanism. The decoder can also be implemented using an RNN or a transformer network. It generates an output sequence one step at a time, using the attention-weighted input and its own hidden state to make decisions about what to generate next.

  • Attention Weights: Attention weights are the values that are calculated by the attention mechanism to indicate how important each input is for the task at hand. They are calculated by comparing the query vector to the hidden states created by the encoder. The attention weights are then used to create a weighted sum of the hidden states, which is used as the input for the decoder.

  • Multi-head Attention: Multi-head attention is an extension of the basic attention mechanism, where multiple attention mechanisms are used in parallel. Each attention mechanism is applied to the same input using a different query vector, and the results are concatenated together. This allows the model to attend to multiple parts of the input simultaneously, and can improve performance on certain tasks.

In summary, Attention Networks are a neural network architecture that allows the model to focus on specific parts of the input while processing it. They use an encoder to process the input, an attention mechanism to determine which parts of the input are most important for the task at hand, and a decoder to generate the output. Attention weights are used to indicate how important each input is for the task, and Multi-head Attention is used to attend to multiple parts of the input simultaneously.
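A compact sketch (assuming NumPy) of the core computation described above, attention(Q, K, V) = softmax(QKᵀ/√d_k)·V, applied to made-up encoder hidden states and a single query vector.

```python
# Scaled dot-product attention over a set of encoder hidden states.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between queries and keys
    weights = softmax(scores, axis=-1)   # attention weights (each row sums to 1)
    return weights @ V, weights          # weighted sum of values + the weights themselves

np.random.seed(0)
encoder_states = np.random.randn(6, 8)   # hypothetical hidden states, dimension 8
query = np.random.randn(1, 8)            # task-specific query vector

context, attn_weights = attention(query, encoder_states, encoder_states)
print(attn_weights.round(2))             # how much the query attends to each hidden state
print(context.shape)                     # (1, 8): the attention-weighted summary
```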


  1. Explain Attention with a real-time example..⭐

Ans: Sure, let's take an example of a sentence as input to the attention network: "I want to order pizza with extra cheese."

  • Encoder: The encoder takes in the input sentence and processes it to create a representation of it. This can be done using an RNN or a transformer network. In the case of an RNN, the encoder creates a sequence of hidden states, one for each word in the sentence. For example, the hidden states for the eight-word input sentence above would be [h1, h2, ..., h8].

  • Attention Mechanism: The attention mechanism takes in the hidden states created by the encoder, as well as a query vector that represents the task-specific information. For example, the query vector might represent the task of translation, and it's used to compare the hidden states with the query to calculate the attention weights for each hidden state. These attention weights indicate how important each hidden state is for the task of translation.

  • Decoder: The decoder takes the attention-weighted input from the attention mechanism, which is a weighted sum of the hidden states, and generates an output sequence one step at a time. The decoder can be implemented using an RNN or a transformer network. In the case of a machine translation task, the decoder generates a translated sentence one word at a time, using the attention-weighted input and its own hidden state to make decisions about what to generate next.

  • Attention Weights: Attention weights are the values that are calculated by the attention mechanism to indicate how important each hidden state is for the task at hand. For example, the word "pizza" might have a high attention weight, because it is a key concept in the task of ordering food, whereas the word "I" might have a low attention weight.
  • Multi-head Attention: Multi-head attention is an extension of the basic attention mechanism, where multiple attention mechanisms are used in parallel. Each attention mechanism is applied to the same input using a different query vector, and the results are concatenated together. This allows the model to attend to multiple parts of the input simultaneously, and can improve performance on certain tasks.

Attention Blog

  1. Explain every block of a transformer and how it works, taking an input sequence string as an example..⭐

Ans: Transformers are a type of neural network architecture that are commonly used in natural language processing tasks such as machine translation and text summarization. They were introduced in the paper "Attention Is All You Need" by Google researchers in 2017. Here are the main components of transformers and their working principles:

  • Embedding Layer: The input sequence, such as a sentence, is passed through an embedding layer, which converts each word in the sentence into a dense vector representation. This embedding is learned during the training process, and it captures the semantic and syntactic information of the words in the sentence.

  • Multi-Head Self-Attention Layer: The transformer uses self-attention to determine which parts of the input are most important for the task at hand. Multi-head attention applies several attention mechanisms in parallel, each with its own query, key, and value projections. Each head computes the scaled dot product between the query and key matrices, applies a softmax to the result to obtain the attention weights, and uses these weights to form a weighted sum of the value matrix; the outputs of all heads are concatenated to create the output of the multi-head self-attention layer.

  • Position-wise Feed-forward Layer: This layer is applied to the output of the multi-head self-attention layer, and it consists of two linear transformations with a ReLU activation in between. This layer is responsible for learning the more complex relationships between the words in the sentence.

  • Add & Norm Layer: The output of each sub-layer (the multi-head self-attention and the position-wise feed-forward layer) is added to that sub-layer's input through a residual connection and then passed through layer normalization. This helps keep the gradients well-behaved and the model stable during training.

  • Decoder: The decoder takes the encoder's output (the result of the stacked add & norm layers) and generates the final output, such as a translated sentence or a summary of the input sentence, one token at a time. (A minimal encoder-block sketch follows below.)

transformers

Transformer Blog
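A minimal sketch (assuming PyTorch) of one encoder block wired together from the pieces above: embedding, multi-head self-attention, add & norm, and the position-wise feed-forward layer. Positional encodings and dropout are omitted to keep the sketch short, and all sizes are hypothetical.

```python
# One transformer encoder block (simplified: no positional encoding, no dropout).
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.self_attn(x, x, x)   # queries, keys, values all come from x
        x = self.norm1(x + attn_out)            # add & norm (residual connection)
        x = self.norm2(x + self.ff(x))          # position-wise feed-forward + add & norm
        return x

vocab_size, d_model, seq_len = 1000, 64, 8           # hypothetical sizes
embedding = nn.Embedding(vocab_size, d_model)
tokens = torch.randint(0, vocab_size, (1, seq_len))  # a toy tokenized input sequence
print(EncoderBlock()(embedding(tokens)).shape)       # torch.Size([1, 8, 64])
```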

  1. What is the need for the Self-Attention Layer in the transformer?.⭐

Ans: The self-attention layer in the transformer is used to determine which parts of the input are most important for the task at hand. It allows the model to focus on specific parts of the input while processing it, rather than processing the entire input in a sequential manner. This is particularly useful in natural language processing tasks, where the meaning of a sentence can depend on the relationships between words that are far apart in the sentence.

  • The self-attention mechanism in the transformer uses scaled dot-product attention: for each position in the input sequence, it computes the dot product between that position's query vector and the key vectors of all positions, scales the result, and passes it through a softmax to obtain attention weights. These weights indicate how important each position in the input sequence is for the task at hand, and they are used to form a weighted sum of the value matrix, creating the output of the self-attention layer.
  • The multi-head attention mechanism in the transformer applies multiple attention mechanisms in parallel, each with its own query, key, and value projections. This allows the model to attend to multiple parts of the input simultaneously, and can improve performance on certain tasks. (A worked single-head example is sketched below.)
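A worked single-head self-attention sketch (assuming NumPy): every position's query is compared with every position's key, so distant tokens can attend to each other. The projection matrices here are random, purely for illustration.

```python
# Single-head self-attention over one sequence: X is projected to Q, K, V,
# and every position forms a weighted mix of all positions' values.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

np.random.seed(0)
seq_len, d_model, d_k = 5, 16, 8
X = np.random.randn(seq_len, d_model)           # token representations for one sequence

# Learned projection matrices (random here, for illustration only)
W_q, W_k, W_v = (np.random.randn(d_model, d_k) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

weights = softmax(Q @ K.T / np.sqrt(d_k))       # (seq_len, seq_len) position-to-position weights
output = weights @ V                            # each position attends to every position

print(weights.shape, output.shape)              # (5, 5) (5, 8)
```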

  1. Explain every component of the BERT transformer in detail with a suitable example..⭐

Ans: BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model developed by Google that is trained on a large corpus of text data. Here are the main components of BERT and their working principles:

  • Embedding Layer: Like any transformer, BERT starts with an embedding layer that converts each token (word or subword) in the input sentence into a dense vector representation. These representations are learned during the training process, and they capture the semantic and syntactic information of the tokens.

  • Encoder: BERT uses a multi-layer transformer encoder to process the input sentence. Each layer in the encoder consists of a multi-head self-attention mechanism and a position-wise feed-forward neural network. The self-attention mechanism allows the model to attend to different parts of the input sentence simultaneously and the feed-forward network is used to learn complex relationships between the tokens.

  • Bidirectional context: BERT is pre-trained on a large corpus of text data with a bidirectional context, which means that it is able to understand the context of a word based on the words that come both before and after it in a sentence. This is done by pre-training the model on a task where it has to predict a masked word based on its surrounding context; this task is called "masked language modeling" (MLM).

  • Pre-training and Fine-Tuning: BERT is pre-trained on a large corpus of text data, which allows it to learn general-purpose representations of language. These representations can then be fine-tuned on specific tasks, such as named entity recognition, question answering, and sentiment analysis, by training the model on a smaller dataset for the specific task.
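A short usage sketch (assuming the Hugging Face `transformers` library and an internet connection to download the pre-trained weights) showing tokenization and the contextual embeddings produced by a pre-trained BERT encoder.

```python
# Loading a pre-trained BERT encoder and inspecting its contextual token embeddings.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT builds bidirectional context for every token.", return_tensors="pt")
outputs = model(**inputs)

# One 768-dim contextual vector per (sub)word token for bert-base.
print(outputs.last_hidden_state.shape)   # e.g. torch.Size([1, num_tokens, 768])
```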


  1. What are the evaluation metrics for a question-answering system using Transformers?.⭐

Ans: There are several evaluation metrics that can be used to evaluate the performance of a question-answering system using Transformers, some of the most common ones are:

  • Exact match accuracy: This metric measures the proportion of questions for which the model's predicted answer exactly matches the ground truth answer. This is a simple and straightforward metric, but it may not take into account variations in wording or synonyms.

  • F1 score: This is the harmonic mean of precision and recall. It is commonly used when the dataset has an imbalanced distribution of positive and negative examples. In question answering, the precision is calculated as the number of correctly predicted answers divided by the total number of predicted answers, and recall is calculated as the number of correctly predicted answers divided by the total number of ground truth answers.

  • Mean Reciprocal Rank (MRR): This metric is a measure of how quickly the model finds the correct answer. It is calculated as the average of the reciprocal ranks of the correct answers for all the questions. The reciprocal rank is the reciprocal of the rank of the first correct answer, so the MRR is a measure of the average rank of the first correct answer.

  • Mean Average Precision (MAP): This metric is similar to the mean reciprocal rank but it is a measure of the overall quality of the answers returned by the model. It is calculated as the mean of the average precision scores for each question, where the average precision is the average of the precision scores calculated at the position of each relevant answer.

  • EM/F1 on SQuAD 2.0: SQuAD 2.0 is a widely used question answering dataset, and EM/F1 scores are commonly used as evaluation metrics for it. Exact match (EM) is a binary score that measures whether the predicted answer exactly matches one of the ground-truth answers, whereas F1 is the harmonic mean of precision and recall.
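A small sketch (plain Python) of Exact Match and token-level F1 for a single prediction. This simplified version only lower-cases and splits on whitespace; the official SQuAD evaluation script additionally strips punctuation and articles before comparing.

```python
# Exact Match and token-level F1 for extractive question answering.
from collections import Counter

def exact_match(prediction, ground_truth):
    return int(prediction.strip().lower() == ground_truth.strip().lower())

def f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("extra cheese", "Extra cheese"))              # 1
print(round(f1("pizza with extra cheese", "extra cheese"), 2))  # 0.67
```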


  1. How is RoBERTa different from the BERT transformer?.⭐

Ans: RoBERTa (Robustly Optimized BERT Pretraining Approach) is a variation of the BERT transformer that was developed by researchers at Facebook AI. RoBERTa is similar to BERT in that it is a transformer-based language model that is trained on a large corpus of text data, but there are several key differences between the two models:

  • Training data: RoBERTa is trained on a much larger corpus of text data than BERT. In addition to the BooksCorpus and English Wikipedia used for BERT, RoBERTa adds CC-News, OpenWebText, and Stories, giving roughly ten times more text (about 160GB) than BERT was trained on.

  • Training objective: BERT is trained on two objectives, masked language modeling, and next sentence prediction. RoBERTa, on the other hand, is only trained on the masked language modeling task.

  • Batch size: RoBERTa uses a larger batch size during training than BERT, which allows the model to learn from more data at once and achieve higher performance.

  • Learning rate schedule: RoBERTa uses different learning-rate settings than BERT, with a higher peak learning rate and a warmup period tuned for its much larger batch size, followed by decay over the course of training.

  • Number of training steps: RoBERTa is trained for more steps than BERT, which allows the model to learn more from the larger dataset and achieve higher performance.

  • Dropout rate: RoBERTa uses a lower dropout rate than BERT, which allows the model to retain more information during training and achieve higher performance.


  1. How is the confidence score calculated in a RoBERTa transformer-based Question Answering System?.⭐

Ans: In a RoBERTa-based question answering system, confidence scores are calculated to indicate how confident the model is in its predicted answer. The exact method of calculating the confidence score can vary depending on the specific implementation of the model, but some common methods include:

  • Softmax probability: One way to calculate the confidence score is to use the softmax probability of the predicted answer. This is calculated by taking the exponential of the predicted logit for the answer, and normalizing it by the sum of the exponentials of the logits for all possible answers. The softmax probability ranges from 0 to 1 and represents the likelihood of the answer being correct.

  • Max-pooling: Another way to calculate the confidence score is to use max-pooling over the attention scores for the predicted answer. This is calculated by taking the maximum attention score from all the attention heads for the predicted answer. The max-pooled score ranges from 0 to 1 and represents the degree of alignment between the question and the answer.

  • Mean-pooling: Similar to max-pooling, another way to calculate the confidence score is to use mean-pooling over the attention scores for the predicted answer. This is calculated by taking the average attention score from all the attention heads for the predicted answer.

  • MLM score: Another way to calculate the confidence score is by using the scores of the masked language modeling task. The MLM task is the task of predicting a token given its surrounding context, this score indicates how likely is that the predicted answer is a valid word in the language.
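A sketch (assuming NumPy) of the softmax-probability approach for extractive QA: the model produces start and end logits over the context tokens, and a common confidence score is the product of the softmax probabilities of the selected start and end positions. The logits below are made-up values.

```python
# Confidence of a predicted answer span from hypothetical start/end logits.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Hypothetical logits over a 6-token context produced by a RoBERTa QA head.
start_logits = np.array([0.1, 2.5, 0.3, 0.2, 0.1, 0.0])
end_logits   = np.array([0.0, 0.2, 3.1, 0.4, 0.1, 0.2])

start_probs, end_probs = softmax(start_logits), softmax(end_logits)
start_idx, end_idx = int(start_probs.argmax()), int(end_probs.argmax())

confidence = start_probs[start_idx] * end_probs[end_idx]
print(start_idx, end_idx, round(float(confidence), 3))   # span indices and a score in (0, 1)
```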


📙 Back to Top Section