Section 1
##### Introduction to Analytics

Section 2
##### Understanding Probability and Probability Distribution

1. Introduction to Probability Theory
2. Types of Probability Distribution: Discrete and Continuous Distributions
3. Understanding Probability Mass Function and Probability Density Function
4. Normal Distribution and Standard Normal Distribution
5. Understanding Binomial Distribution and Poisson Distribution
6. Application of Binomial Distribution
7. Application of Normal Distribution

Section 3
##### Introduction to Sampling Theory and Estimation

1. Concept of Population and Sample
2. Introduction to Some Important Terminologies
3. Parameter and Statistic
4. Properties of a Good Estimator
5. Standard Deviation and Standard Error
6. Point and Interval Estimation
7. Confidence Level and Level of Significance
8. Constructing Confidence Intervals
9. Formulation of Null and Alternative Hypotheses and Performing Simple Tests of Hypothesis

Section 4
##### Introduction to Segmentation Techniques: Factor Analysis

1. Introduction to Factor Analysis and Various Techniques
2. Principal Component Analysis (PCA) and Exploratory Factor Analysis (EFA)
3. KMO Measure of Sampling Adequacy (MSA) Test and Bartlett's Test of Sphericity
4. The Mineigen Criterion and Scree Plot
5. Introduction to the Factor Loading Matrix and Rotation Techniques such as Varimax
6. Application of the Technique on a Case Study
7. Interpretation of the Results

Section 5
##### Introduction to Segmentation Techniques: Cluster Analysis

1. Introduction to Cluster Analysis and Various Techniques
2. Hierarchical and Non-Hierarchical Clustering Techniques
3. Using Hierarchical Clustering in R
4. Performing K-Means Clustering in R
5. Divisive and Agglomerative Clustering
6. Applications of Cluster Analysis in Analytics, with Examples Covering Profiling and Interpretation of the Clusters
7. Application of the Techniques on a Case Study
8. Interpretation of the Results

Section 6
##### Correlation and Linear Regression

1. Introduction to Pearson's Correlation Coefficient
2. Correlation and Causation; Fitting a Simple Linear Regression Model
3. Introduction to the Classical Linear Regression Model (CLRM)
4. Assumptions of the CLRM
5. Understanding the Multiple Linear Regression Model (MLRM) Technique
6. Understanding the Statistics Related to Linear Regression
7. Goodness-of-Fit Tests for Linear Regression
8. Importing a Dataset in R to Apply Linear Regression
9. Splitting the Dataset into Training and Testing Sets
10. Conducting Tests to Understand the Results Obtained
11. Checking the Accuracy of the Linear Regression Model
12. Assessing Collinearity, Heteroskedasticity and Autocorrelation

Section 7
##### Introduction to categorical data analysis and Logistic Regression

1. Comparison Between Linear Regression and Logistic Regression
2. Performing Goodness-of-Fit Tests of the Model
3. Introduction to Percent Concordant, AIC, SC and the Hosmer–Lemeshow Test
4. Receiver Operating Characteristic (ROC) Curve and Area Under the Curve (AUC)
5. Interpretation of the Model: Overall Fit and Identifying Influential Variables Using the Odds-Ratio Criterion
6. Understanding ROC Testing
7. Checking the Accuracy of the Model
8. Application and Interpretation Using a Case Study

Section 8
##### Introduction to Time Series Analysis

1. What Is Time Series Analysis: Objectives and Assumptions of Time Series
2. Identifying Patterns in Time Series Data: Decomposition of the Time Series
3. Introduction to Smoothing Techniques: Simple Moving Average and Weighted Moving Average
4. Exponential Smoothing and Holt's Linear Exponential Smoothing; Examples of Seasonality and Detecting Seasonality in Time Series Data
5. Autoregressive and Moving Average Models and Introduction to the Box–Jenkins Methodology
6. Introduction to the Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) Models
7. Building an ARIMA Model
8. Detection of Stationarity and Seasonality in an ARIMA Model
9. Detecting the Order of the AR and MA Terms of an ARIMA Model
10. Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF)
11. Detecting the Order Using the AIC and BIC Criteria
12. Estimation and Forecasting Using R

Section 9
##### Text Mining

1. Introduction to Text Mining
2. Importance of Applying This Technique
3. Packages Required in R for Text Mining
4. Understanding the Word Cloud Methodology
5. Performing a Text Mining Analysis on a Dataset
6. Understanding Sentiment Analysis
7. Application of the Technique on a Dataset
8. Interpretation of the Results

Section 10
##### Market Basket Analysis

Section 11
##### Statistical Significance: T-Tests, Chi-Square Tests and Analysis of Variance

1. Performing a Test of One Sample Mean
2. Difference Between Two Group Means (Independent Samples)
3. Difference Between Two Group Means (Paired Samples)
4. Performing Chi-Square Tests: Test of Independence
5. Descriptive Statistics and Inferential Statistics
6. T-Tests and Their Application in Case Studies
7. ANOVA Testing and Its Application in Case Studies
8. Interpretation of the Test Results
9. Chi-Square Test of Independence
10. Tests for Correlation and Partial Correlation
11. Performing Post-Hoc Multiple Comparison Tests in R Using Tukey's HSD
12. Performing Two-Way ANOVA with and without Interactions

**Measures of Dispersion**

Dispersion measures the extent to which the individual observations in a data set vary. It relates to those measures which capture the degree of heterogeneity of a set of statistical observations from a central value. Measuring heterogeneity involves the construction of estimators, which provide a standard or representative value of the scattering as a function of all the sample observations. However, the heterogeneity of the data adversely affects the efficiency of the estimator, i.e. the greater the dispersion in a data set, the lower the efficiency of the estimator. Therefore, to form an estimator of sufficient efficiency it is necessary to form an idea of the dispersion present in the data. The main classes of measures of dispersion are:

- Absolute measures of dispersion
- Relative measures of dispersion

**Absolute Measures of Dispersion**

Absolute measures of dispersion are those measures of dispersion which depend on the units of measurement. Hence, if the variability of two or more distributions with the same unit of measurement is to be compared, the absolute measures are helpful. The three main absolute measures of dispersion are:

- Range
- Mean Deviation
- Standard Deviation

**Range**

The range of a set of statistical observations is defined as the difference between the highest and the lowest values in the set. This is the simplest measure of dispersion. Formally, Range(X) = Xmax − Xmin, where Xmax is the maximum and Xmin the minimum value of the variable X over the observations x1, x2, …, xn. The range can compare the variability of two or more distributions with the same units of measurement, but it cannot be used to compare distributions given in different units.
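A minimal sketch of the definition above, written here in Python with made-up numbers (the course itself works in R):

```python
def sample_range(xs):
    # Range(X) = Xmax - Xmin: the simplest absolute measure of dispersion.
    return max(xs) - min(xs)

data = [12, 7, 19, 4, 15]  # hypothetical observations
print(sample_range(data))  # 19 - 4 = 15
```

Note the result carries the same units as the data, which is exactly why the range cannot compare distributions recorded in different units.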

**Mean Deviation**

Mean deviation is defined as the arithmetic average of the absolute deviations of the items from a measure of central tendency, which may be the mean, median or mode. Generally, the mean deviation is calculated from either the mean or the median. It can also be calculated about any arbitrary value A.

**Standard Deviation**

The standard deviation is considered an improvement over the mean deviation, since it gets rid of signs by taking the squares of the deviations of the variable about A instead of their absolute values. The standard deviation is defined as the positive square root of the arithmetic mean of these squared deviations, i.e. it is the root-mean-squared deviation about A. The standard deviation is measured about the arithmetic mean of the data set, since the root-mean-squared deviation is smallest when taken about the mean. This is a striking feature of the standard deviation as a measure of dispersion.
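The claim above can be checked numerically: the root-mean-squared deviation about the mean equals the standard deviation, and it grows when taken about any other point (Python sketch, illustrative data):

```python
import math
import statistics

def rms_deviation(xs, a):
    # Root-mean-squared deviation of the observations about an arbitrary point A.
    return math.sqrt(sum((x - a) ** 2 for x in xs) / len(xs))

data = [2, 4, 6, 8, 10]
m = statistics.mean(data)       # 6
sd = rms_deviation(data, m)     # standard deviation: sqrt(8) ≈ 2.828
assert math.isclose(sd, statistics.pstdev(data))
# The RMS deviation about any point other than the mean is larger:
print(rms_deviation(data, 5) > sd)  # True
```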

**Relative Measures of Dispersion**

Relative measures of dispersion are measures that are independent of the units of measurement and are used for comparing the dispersions of two or more distributions given in different units. Some of the most important relative measures of dispersion are:

- Coefficient of Range
- Coefficient of Variation
- Coefficient of Mean Deviation

**Coefficient of Range**

To compare the variability of one distribution with another when the units of measurement differ, the absolute measure, range, cannot be used. The relative version used for measuring the variability between such distributions is called the coefficient of range. The coefficient of range is the ratio of the difference between the two extreme observations of a distribution to their sum.
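As a sketch of that ratio (Python, with hypothetical observations):

```python
def coefficient_of_range(xs):
    # (Xmax - Xmin) / (Xmax + Xmin): a pure number, free of the units of X.
    return (max(xs) - min(xs)) / (max(xs) + min(xs))

marks = [4, 7, 12, 15, 19]          # hypothetical observations
print(coefficient_of_range(marks))  # (19 - 4) / (19 + 4) ≈ 0.652
```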

**Coefficient of Variation**

The relative measure of dispersion based on the standard deviation is called the coefficient of variation (C.V.). It is a pure number, independent of the units of measurement, and is thus suitable for comparing the variability, homogeneity or uniformity of two or more distributions. A distribution with a smaller C.V. is said to be more homogeneous or less variable than another, and a distribution with a larger C.V. more heterogeneous.
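A sketch of the comparison the C.V. makes possible, using two hypothetical samples recorded in different units (Python):

```python
import statistics

def coefficient_of_variation(xs):
    # C.V. = (standard deviation / mean) * 100 -- a pure number.
    return statistics.pstdev(xs) / statistics.mean(xs) * 100

heights_cm = [160, 165, 170, 175, 180]  # hypothetical sample
weights_kg = [55, 62, 70, 78, 85]       # hypothetical sample
# The series with the smaller C.V. is the more homogeneous one,
# even though the two are recorded in different units.
print(coefficient_of_variation(heights_cm))
print(coefficient_of_variation(weights_kg))
```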

**Coefficient of Mean Deviation**

The coefficient of mean deviation is the relative measure associated with the mean deviation. It is defined as the ratio of the mean deviation to the average about which it has been calculated.
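That ratio can be sketched as follows, defaulting to the mean as the centre (Python, illustrative data):

```python
import statistics

def coefficient_of_mean_deviation(xs, about=None):
    # Ratio of the mean deviation to the average about which it is calculated.
    a = statistics.mean(xs) if about is None else about
    mean_dev = sum(abs(x - a) for x in xs) / len(xs)
    return mean_dev / a

print(round(coefficient_of_mean_deviation([2, 4, 6, 8, 10]), 3))  # 2.4 / 6 = 0.4
```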