Data Mining Syllabus – PyMathCamp

Demand for data science talent is exploding. McKinsey estimates that by 2018, a 500,000-strong workforce of data scientists will be needed in the US alone. The resulting talent gap must be filled by a new generation of data scientists. The term data scientist is quite ambiguous. The Center for Data Science at New York University describes data science as,

the study of the generalizable extraction of knowledge from data [using] mathematics, machine learning, artificial intelligence, statistics, databases and optimization, along with a deep understanding of the craft of problem formulation to engineer effective solutions

Data science.

As you can see, a data scientist is a professional with a multidisciplinary profile. Optimizing the value of data depends on the skills of the data scientists who process it.

Intellij.my offers these essentials through PyMathCamp. This course is your stepping stone to becoming a data scientist. Key concepts in data acquisition, preparation, exploration and visualization, along with examples of how to build interactive data science solutions, are presented using IPython notebooks.
You will learn to write Python code and apply data science techniques to many fields of interest, for example finance, robotics, marketing, gaming, computer vision, speech recognition and many more. By the end of this course, you will know how to build machine learning models and derive insights from data.

The course is organized into 11 chapters. The major components of PyMathCamp are:

1) Data management (extract, transform, load, storing, cleaning and transformation)

We begin by studying data warehousing and OLAP, data cube technology and multidimensional databases. (Chapters 2, 3 and 4)

2) Data Mining (machine learning technology, math and statistics)

Descriptive statistics are applied for data exploration, along with mining frequent patterns, associations and correlations. We will also learn about the different types of machine learning methodology through Python programming. (Chapter 5)

3) Data Analysis/Prescription (classification, regression, clustering, visualization)

At this stage, we are ready to dive into data modelling with different types of machine learning methods. PyMathCamp includes many machine learning techniques to analyse and mine data, including linear regression, logistic regression, support vector machines, ensemble methods and clustering, among numerous others. Model construction and validation are studied. This rigorous data modelling process is further enhanced with graphical visualisation. The end result is insight for intelligent decision making. (Chapters 6 and 7)

Source: Pethuru (2014)

Encapsulating data science intelligence and investing in modelling is vital for any organization to be successful.

Hence, we will use the data mining knowledge gained from the above chapters to analyse, extract and mine different types of data for value. More specifically, we cover spatial and spatiotemporal data, object, multimedia, text, time series and web data. (Chapters 8, 9 and 10)

After spending a few months learning and programming with PyMathCamp, we will end the course by updating you with the latest applications and trends of data mining. (Chapter 11)

In conclusion, PyMathCamp is the perfect course for students who might not have the rigorous technical and programming background required to do data science on their own.

Credit to: Joe Choong

“Future belongs to those who figure out how to collect and use data successfully.” 

Muhammad Nurdin, CEO of IntelliJ.

Regression on the Airfoil Self-Noise dataset using a Linear Regression approach

This case study uses the Airfoil Self-Noise dataset.

The NASA data set comprises different size NACA 0012 airfoils at various wind tunnel speeds and angles of attack. The span of the airfoil and the observer position were the same in all of the experiments.

The columns in this dataset are:

  1. A = Frequency
  2. B = Angle of attack
  3. C = Chord length
  4. D = Free-stream velocity
  5. E = Suction side displacement thickness
  6. F = Scaled sound pressure level

Sample Airfoil Self-Noise data

Prediction variables (attributes)

  1. Frequency, in Hertz.
  2. Angle of attack, in degrees.
  3. Chord length, in meters.
  4. Free-stream velocity, in meters per second.
  5. Suction side displacement thickness, in meters.

Target variables

  1. Scaled sound pressure level, in decibels.

shape of the DataFrame

There are 1503 observations in the dataset.
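Loading and inspecting the data can be sketched as follows with pandas. The tab-separated, headerless layout matches the UCI copy of the dataset; the inline rows below are placeholders standing in for the real file.

```python
from io import StringIO
import pandas as pd

# Three rows in the dataset's tab-separated, headerless layout
# (values are placeholders for illustration, not taken from the real file).
raw = ("800\t0.0\t0.3048\t71.3\t0.0027\t126.2\n"
       "1000\t0.0\t0.3048\t71.3\t0.0027\t125.2\n"
       "1250\t0.0\t0.3048\t71.3\t0.0027\t125.9\n")

cols = ["A", "B", "C", "D", "E", "F"]  # A = frequency ... F = sound pressure level
df = pd.read_csv(StringIO(raw), sep="\t", header=None, names=cols)
print(df.shape)  # (3, 6) here; the full dataset is (1503, 6)
```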

Scatter plots

Use Statsmodels to estimate the model coefficients for the Airfoil Self-Noise data with B (angle of attack):

model coefficients for the Airfoil Self-Noise data

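The Statsmodels fit behind those coefficients can be sketched as below. The five-row frame is a hypothetical stand-in for the 1503-row airfoil data, so the fitted numbers here will not match the ones reported above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in frame; in the notebook, df holds the 1503 airfoil observations
# (these five rows are illustrative placeholders, not real measurements).
df = pd.DataFrame({"B": [0.0, 2.0, 4.0, 8.0, 12.0],
                   "F": [126.2, 126.3, 126.5, 126.4, 126.6]})

# Ordinary least squares: scaled sound pressure level (F) regressed on
# angle of attack (B), i.e. F = beta0 + beta1 * B
model = smf.ols(formula="F ~ B", data=df).fit()
print(model.params)  # Intercept (beta0) and B (beta1)
```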
Interpreting Model Coefficients

Interpreting the angle of attack coefficient (β1)

  • A “unit” increase in angle of attack is associated with a 0.008927 “unit” increase in F (scaled sound pressure level).

Using the Model for Prediction

Suppose the angle of attack is 70. What would we predict for the scaled sound pressure level? (First approach for prediction)

126.309388 + (0.008927 * 70) = 126.934278

Thus, we would predict scaled sound pressure level of 126.934278.

Use Statsmodels to make the prediction: (Second approach for prediction)

Statsmodels to make the prediction

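Both prediction approaches can be sketched together; `df` and `model` are hypothetical stand-ins for the airfoil data and the Statsmodels fit from the notebook.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in frame and fit; in the notebook these come from the airfoil data.
df = pd.DataFrame({"B": [0.0, 2.0, 4.0, 8.0, 12.0],
                   "F": [126.2, 126.3, 126.5, 126.4, 126.6]})
model = smf.ols("F ~ B", data=df).fit()

# First approach: plug x = 70 into the fitted equation by hand
b0, b1 = model.params["Intercept"], model.params["B"]
manual = b0 + b1 * 70

# Second approach: hand Statsmodels a one-row frame of new observations
X_new = pd.DataFrame({"B": [70]})
auto = model.predict(X_new).iloc[0]

print(manual, auto)  # both approaches give the same prediction
```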
Plotting the Least Squares Line

Make predictions for the smallest and largest observed values of x, and then use the predicted values to plot the least squares line:

DataFrame with the minimum and maximum values of B

predictions for those x values

least squares line

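The plotting steps above can be sketched as follows, again with stand-in data; the output file name `least_squares_line.png` is a hypothetical choice for illustration.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

# Stand-in frame and fit; in the notebook these come from the airfoil data.
df = pd.DataFrame({"B": [0.0, 2.0, 4.0, 8.0, 12.0],
                   "F": [126.2, 126.3, 126.5, 126.4, 126.6]})
model = smf.ols("F ~ B", data=df).fit()

# DataFrame with the minimum and maximum observed values of B
X_new = pd.DataFrame({"B": [df.B.min(), df.B.max()]})

# Predictions for those two x values
preds = model.predict(X_new)

# Scatter of the data plus the least squares line through the two endpoints
ax = df.plot(kind="scatter", x="B", y="F")
ax.plot(X_new.B, preds, c="red", linewidth=2)
plt.savefig("least_squares_line.png")
```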
confidence intervals for the model coefficients

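A sketch of computing the confidence intervals with Statsmodels' `conf_int` (stand-in data as before; the intervals are 95% by default):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in frame and fit; in the notebook these come from the airfoil data.
df = pd.DataFrame({"B": [0.0, 2.0, 4.0, 8.0, 12.0],
                   "F": [126.2, 126.3, 126.5, 126.4, 126.6]})
model = smf.ols("F ~ B", data=df).fit()

ci = model.conf_int()  # 95% intervals by default (alpha=0.05)
print(ci)  # one row per coefficient; columns 0 and 1 are lower/upper bounds
```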
Data Visualization (Scatter Plot) on Forest Fires dataset

The Forest Fires dataset was used in the 2008 paper by D. Zhang, Y. Tian and P. Zhang, Kernel-Based Nonparametric Regression Method.

In [Cortez and Morais, 2007], the output ‘area’ was first transformed with a ln(x+1) function. Then, several data mining methods were applied. After fitting the models, the outputs were post-processed with the inverse of the ln(x+1) transform. Four different input setups were used. The experiments were conducted using 10-fold cross-validation with 30 runs. Two regression metrics were measured: MAD and RMSE. A Gaussian support vector machine (SVM) fed with only four direct weather conditions (temp, RH, wind and rain) obtained the best MAD value: 12.71 +- 0.01 (mean and 95% confidence interval using a Student's t-distribution). The best RMSE was attained by the naive mean predictor. An analysis of the regression error characteristic (REC) curve shows that the SVM model predicts more examples within a lower admitted error. In effect, the SVM model is better at predicting small fires, which are the majority.

The columns in this dataset are:

  • X
  • Y
  • month
  • day
  • FFMC
  • DMC
  • DC
  • ISI
  • temp
  • RH
  • wind
  • rain
  • area

The scatter plot was generated using Pandas (http://pandas.pydata.org/) and Matplotlib (http://matplotlib.org/).

Sample forest fires data

Sample forest fires data

Sample forest fires data

Prediction variables (attributes)

  1. X – x-axis spatial coordinate within the Montesinho park map: 1 to 9
  2. Y – y-axis spatial coordinate within the Montesinho park map: 2 to 9
  3. month – month of the year: ‘jan’ to ‘dec’
  4. day – day of the week: ‘mon’ to ‘sun’
  5. FFMC – FFMC index from the FWI system: 18.7 to 96.20
  6. DMC – DMC index from the FWI system: 1.1 to 291.3
  7. DC – DC index from the FWI system: 7.9 to 860.6
  8. ISI – ISI index from the FWI system: 0.0 to 56.10
  9. temp – temperature in Celsius degrees: 2.2 to 33.30
  10. RH – relative humidity in %: 15.0 to 100
  11. wind – wind speed in km/h: 0.40 to 9.40
  12. rain – outside rain in mm/m2 : 0.0 to 6.4

Target variables

  1. area – the burned area of the forest (in ha): 0.00 to 1090.84 (this output variable is very skewed towards 0.0, thus it may make sense to model with the logarithm transform).
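The suggested ln(x+1) transform and its inverse, as described in [Cortez and Morais, 2007], can be sketched with NumPy (the sample areas below are illustrative):

```python
import numpy as np

# Illustrative burned areas in hectares (the real column spans 0.00 to 1090.84)
area = np.array([0.0, 0.36, 4.61, 1090.84])

# Transform applied before fitting: y = ln(area + 1)
y = np.log1p(area)

# Inverse transform applied to model outputs: area = exp(y) - 1
recovered = np.expm1(y)

print(np.allclose(recovered, area))  # True -- the transform round-trips
```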

shape of the DataFrame

There are 517 observations in the dataset.
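A minimal sketch of inspecting the data with pandas. The comma-separated layout with a header row matches the UCI copy of the dataset; the two inline rows are illustrative placeholders standing in for the real file.

```python
from io import StringIO
import pandas as pd

# Two rows in the dataset's comma-separated layout (values are
# placeholders for illustration, not taken from the real file).
raw = """X,Y,month,day,FFMC,DMC,DC,ISI,temp,RH,wind,rain,area
7,5,mar,fri,86.2,26.2,94.3,5.1,8.2,51,6.7,0.0,0.0
7,4,oct,tue,90.6,35.4,669.1,6.7,18.0,33,0.9,0.0,0.0
"""
fires = pd.read_csv(StringIO(raw))
print(fires.shape)  # (2, 13) here; the full dataset is (517, 13)
```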

Scatter plots

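A sketch of generating one such scatter plot with Pandas and Matplotlib. The choice of columns (temp versus area), the stand-in values and the output file name are assumptions for illustration.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Stand-in frame; in practice this is the 517-row forest fires data.
fires = pd.DataFrame({"temp": [8.2, 18.0, 14.6, 22.8],
                      "area": [0.0, 0.0, 1.56, 9.27]})

ax = fires.plot(kind="scatter", x="temp", y="area")
ax.set_xlabel("temperature (Celsius)")
ax.set_ylabel("burned area (ha)")
plt.savefig("forest_fires_scatter.png")
```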