Classification on Bank Marketing dataset

The Bank Marketing dataset was used in Wisaeng, K. (2013). A comparison of different classification techniques for bank direct marketing. International Journal of Soft Computing and Engineering (IJSCE), 3(4), 116-119.

The data is related to the direct marketing campaigns (phone calls) of a Portuguese banking institution. The campaigns were based on phone calls; often, more than one contact with the same client was required in order to assess whether the product (a bank term deposit) would be subscribed (‘yes’) or not (‘no’). The classification goal is to predict whether the client will subscribe to a term deposit (variable y).

There are four datasets:
1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010), very close to the data analyzed in [Moro et al., 2014]
2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.
3) bank-full.csv with all examples and 17 inputs, ordered by date (an older version of this dataset with fewer inputs).
4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3) (an older version of this dataset with fewer inputs).
The smaller datasets are provided to test more computationally demanding machine learning algorithms (e.g., SVM).

The columns in this dataset are:

  • age
  • job
  • marital
  • education
  • default
  • housing
  • loan
  • contact
  • month
  • day_of_week
  • duration
  • campaign
  • pdays
  • previous
  • poutcome
  • emp.var.rate
  • cons.price.idx
  • cons.conf.idx
  • euribor3m
  • nr.employed
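
For readers who want to explore the data, a minimal loading sketch with pandas follows; the file name/path is an assumption, and note that the UCI files use ';' as the field separator.

```python
import pandas as pd

# The UCI bank marketing CSV files use ';' as the field separator
# (the file name/path here is an assumption).
bank = pd.read_csv("bank-additional-full.csv", sep=";")

print(bank.shape)                # expected (41188, 21): 20 inputs plus the target y
print(bank["y"].value_counts())  # 'yes' / 'no' class balance of the target
```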

 

Data Mining Syllabus – PyMathCamp

Demand for data science talent is exploding. McKinsey estimates that by 2018 a 500,000-strong workforce of data scientists will be needed in the US alone. The resulting talent gap must be filled by a new generation of data scientists. The term “data scientist” is quite ambiguous. The Center for Data Science at New York University describes data science as,

“the study of the generalizable extraction of knowledge from data [using] mathematics, machine learning, artificial intelligence, statistics, databases and optimization, along with a deep understanding of the craft of problem formulation to engineer effective solutions.”


As you can see, a data scientist is a professional with a multidisciplinary profile. Optimizing the value of data depends on the skills of the data scientists who process it.

Intellij.my offers these essentials through PyMathCamp. This course is your stepping stone to becoming a data scientist. Key concepts in data acquisition, preparation, exploration and visualization, along with examples of how to build interactive data science solutions, are presented using IPython notebooks.
You will learn to write Python code and apply data science techniques to many fields of interest, for example finance, robotics, marketing, gaming, computer vision, speech recognition and many more. By the end of this course, you will know how to build machine learning models and derive insights from data.

The course is organized into 11 chapters. The major components of PyMathCamp are:

1) Data management (extract, transform, load, storing, cleaning and transformation)

We begin by studying data warehousing and OLAP, data cube technology and multidimensional databases. (Chapters 2, 3 and 4)

2) Data Mining (machine learning technology, math and statistics)

Descriptive statistics are applied for data exploration, along with mining frequent patterns, associations and correlations. We will also learn more about the different types of machine learning methodology through Python programming. (Chapter 5)

3) Data Analysis/Prescription (classification, regression, clustering, visualization)

At this stage, we are ready to dive into data modelling with different types of machine learning methods. PyMathCamp includes many different machine learning techniques to analyse and mine data, including linear regression, logistic regression, support vector machines, ensembling and clustering, among numerous others. Model construction and validation are studied. This rigorous data modelling process is further enhanced with graphical visualisation. The end result will lead to insight for intelligent decision making. (Chapters 6 and 7)

Source: Pethuru (2014)

Encapsulating data science intelligence and investing in modelling are vital for any organization to be successful.

Hence, we will use the data mining knowledge gained from the above chapters to analyse, extract and mine different types of data for value; more specifically, spatial and spatiotemporal data, object, multimedia, text, time series and web data. (Chapters 8, 9 and 10)

After spending a few months learning and programming with PyMathCamp, we will end the course by bringing you up to date on the latest applications and trends in data mining. (Chapter 11)

In conclusion, PyMathCamp is the perfect course for students who might not have the rigorous technical and programming background required to do data science on their own.

Credit to: Joe Choong

“The future belongs to those who figure out how to collect and use data successfully.”

Muhammad Nurdin, CEO of IntelliJ.


Education content providers for data science and mathematics needed.

We are hiring education content providers for data science and mathematics.

Requirements:
1. Smart and gets the job done.
2. Experience with Python and Jupyter Notebook.
3. Excellent English writing skills.
4. The education content must be completed in 6 months.
5. A computer science degree.
6. Experience with machine learning techniques.

Syllabus: http://intellij.my/PyMathCamp_syllabus.pdf

Please do not hesitate to contact nurdin@intellij.my for any inquiries.


Grouping on Github Event using k-means clustering

In this section we will use k-means clustering to group developers based on how similar their activity has been, event by event. That is, we will cluster the data based on the 26 variables that we have. This continues from http://intellij.my/2016/03/30/dimensionality-reduction-on-github-event-using-pca-approach/ .

And now we are ready to plot, using the cluster column as color.
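
A minimal sketch of how this clustering and plot might look with scikit-learn, assuming a DataFrame `events` holding the 26 event-count columns and the 2-D projection `pca_2d` from the linked PCA post (both names are assumptions):

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# 'events' is assumed to hold the 26 e_* event-count columns per a_login,
# and 'pca_2d' the 2-D PCA projection from the linked post.
X = StandardScaler().fit_transform(events)

# Group developers into k clusters; k=4 is an arbitrary illustrative choice.
kmeans = KMeans(n_clusters=4, random_state=0)
events["cluster"] = kmeans.fit_predict(X)

# Plot the PCA projection, using the cluster column as color.
plt.scatter(pca_2d[:, 0], pca_2d[:, 1], c=events["cluster"], cmap="viridis")
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```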

Cluster plot chart.

 

Dimensionality Reduction on Github Event using PCA approach

This case study uses the Github Event dataset, focusing on Malaysia’s developers.

This is a read-only API to the GitHub events. These events power the various activity streams on the site.

Github star wars.

The columns in this dataset are:

  1. a_login
  2. e_CommitCommentEvent
  3. e_CreateEvent
  4. e_DeleteEvent
  5. e_DeploymentEvent
  6. e_DeploymentStatusEvent
  7. e_DownloadEvent
  8. e_FollowEvent
  9. e_ForkEvent
  10. e_ForkApplyEvent
  11. e_GistEvent
  12. e_GollumEvent
  13. e_IssueCommentEvent
  14. e_IssuesEvent
  15. e_MemberEvent
  16. e_MembershipEvent
  17. e_PageBuildEvent
  18. e_PublicEvent
  19. e_PullRequestEvent
  20. e_PullRequestReviewCommentEvent
  21. e_PushEvent
  22. e_ReleaseEvent
  23. e_RepositoryEvent
  24. e_StatusEvent
  25. e_TeamAddEvent
  26. e_WatchEvent

Sample Github Event data.


Lower dimension representation of our data frame.


Explained variance ratio.

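A minimal sketch of how the lower-dimensional representation and the explained variance ratio above might be produced with scikit-learn; the DataFrame name `events` is an assumption:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# 'events' is assumed to hold the 26 numeric e_* columns,
# with a_login kept aside as the row index.
X = StandardScaler().fit_transform(events)

# Project the 26 event dimensions onto the first two principal components.
pca = PCA(n_components=2)
pca_2d = pca.fit_transform(X)

print(pca_2d[:5])                     # lower-dimensional representation
print(pca.explained_variance_ratio_)  # variance captured by each component
```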

Plot on the data frame.


Re-scaled mean per a_login across all the events.


Bubble plot chart (a_login mean).


Bubble plot chart (a_login sum).


Intelligent Traffic Light Control using Machine Learning Algorithms

One theoretical foundation of intelligent traffic light control is computational learning theory, which analyzes the computational complexity of machine learning algorithms. Machine learning is commonly divided into supervised and unsupervised learning, with regression being a form of supervised learning; here we deal mainly with supervised learning. Supervised learning is learning where the sample dataset is labeled with useful information. Supervised learning handles two variable types, categorical and continuous. A categorical (nominal) variable is one that has two or more categories, for example male and female. A continuous variable can take on any value within a range, for example a vehicle speed of 42.5 km/h, whereas a discrete count can only take certain values, such as 1 or 2. We can conclude with the hypothesis that intelligent traffic light control can be driven by machine learning algorithms trained on supervised sample training data.

Sample training dataset from drivers’ behavior to determine traffic light status.

In an intelligent traffic light system, we assume the system is embedded with a proper, sophisticated communication and sensor network. The traffic lights are able to communicate with each other, so the system can allocate more resources to reduce ever-increasing travelling times and the waiting times at red traffic lights. The information gathered from the sensor network is applied inside the traffic light so it can study drivers’ behavior. Besides that, drivers will get traffic information from a mobile app provided by the government, so they can plan well before they drive to their destination. A more advanced traffic light system keeps the light green when an emergency vehicle such as a police car or an ambulance goes through the road, to avoid any collision with other vehicles.

Intelligent traffic light control with communication and sensor network system.

Based on observations and experimental data retrieved from traffic light sensors, drivers’ behavior can be collected and converted into valuable information to diminish waiting times at red traffic lights. Besides that, the collected data will be transformed into information using machine learning algorithms to create an efficient and accurate model for predictive analysis. Using predictive analytics, waiting times can be reduced even though the limited resources provided by current infrastructure lead to ever-increasing travelling times. There are a lot of machine learning algorithms, such as neural networks, linear regression, random forests, KNN and many more. The more data is collected, the more accurate the model will be, because the model is not only suitable at a certain time; it needs to be retrained from time to time.
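
As a purely illustrative sketch of such a supervised model, the toy example below trains a random forest on made-up sensor readings; every column name and value is hypothetical, not taken from a real traffic dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical sensor features; none of these columns come from a real dataset.
data = pd.DataFrame({
    "vehicle_count":   [12, 3, 25, 7, 30, 2],
    "avg_speed_kmh":   [20, 55, 10, 40, 8, 60],
    "waiting_seconds": [90, 5, 120, 30, 150, 0],
    "light_status":    ["green", "red", "green", "red", "green", "red"],
})

X = data.drop(columns="light_status")
y = data["light_status"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out samples
```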

Deep learning understanding.

Regression on Airfoil Self-Noise dataset using Linear Regression approach

This case study uses the Airfoil Self-Noise dataset.

The NASA data set comprises different size NACA 0012 airfoils at various wind tunnel speeds and angles of attack. The span of the airfoil and the observer position were the same in all of the experiments.

The columns in this dataset are:

  1. A = Frequency
  2. B = Angle of attack
  3. C = Chord length
  4. D = Free-stream velocity
  5. E = Suction side displacement thickness
  6. F = Scaled sound pressure level

Sample Airfoil Self-Noise data.

Prediction variables (attributes)

  1. Frequency, in Hertzs.
  2. Angle of attack, in degrees.
  3. Chord length, in meters.
  4. Free-stream velocity, in meters per second.
  5. Suction side displacement thickness, in meters.

Target variables

  1. Scaled sound pressure level, in decibels.

Shape of the DataFrame.

There are 1503 observations in the dataset.
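
A minimal sketch of loading the data and checking this, assuming the UCI airfoil_self_noise.dat file (tab-separated, no header row) is available locally:

```python
import pandas as pd

# The UCI airfoil_self_noise.dat file is tab-separated with no header row
# (the local path is an assumption).
cols = ["A", "B", "C", "D", "E", "F"]
airfoil = pd.read_csv("airfoil_self_noise.dat", sep="\t", header=None, names=cols)

print(airfoil.shape)  # expected (1503, 6)
```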

Scatter plots.

Use Statsmodels to estimate the model coefficients for the Airfoil Self-Noise data with B (angle of attack):

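A sketch of how this estimation might look with the statsmodels formula API, reusing the `airfoil` DataFrame from the loading sketch above:

```python
import statsmodels.formula.api as smf

# Simple linear regression of F (scaled sound pressure level)
# on B (angle of attack).
model = smf.ols(formula="F ~ B", data=airfoil).fit()

print(model.params)  # the post reports Intercept = 126.309388, B = 0.008927
```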

Interpreting Model Coefficients

Interpreting the angle of attack coefficient (β1)

  • A “unit” increase in angle of attack is associated with a 0.008927 “unit” increase in F (scaled sound pressure level).

Using the Model for Prediction

Suppose the angle of attack is 70. What would we predict for the scaled sound pressure level? (First approach for prediction)

126.309388 + (0.008927 * 70) = 126.934278

Thus, we would predict scaled sound pressure level of 126.934278.

Use Statsmodels to make the prediction: (Second approach for prediction)

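A sketch of that second approach, reusing the fitted model from above:

```python
import pandas as pd

# statsmodels expects new data with the same column name used in the formula.
X_new = pd.DataFrame({"B": [70]})
print(model.predict(X_new))  # ~ 126.934278, matching the manual calculation above
```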

Plotting the Least Squares Line

Make predictions for the smallest and largest observed values of x, and then use the predicted values to plot the least squares line:

DataFrame with the minimum and maximum values of B.

 

Predictions for those x values.

Least squares line.
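
A sketch of those steps, again reusing the `airfoil` DataFrame and the fitted model:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Predict at the smallest and largest observed B, then draw the fitted line
# over a scatter of the raw data.
X_line = pd.DataFrame({"B": [airfoil["B"].min(), airfoil["B"].max()]})
preds = model.predict(X_line)

plt.scatter(airfoil["B"], airfoil["F"], alpha=0.3)
plt.plot(X_line["B"], preds, color="red", linewidth=2)
plt.xlabel("B (angle of attack)")
plt.ylabel("F (scaled sound pressure level)")
plt.show()
```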

 

Confidence intervals for the model coefficients.
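
These can be read directly off the fitted statsmodels result:

```python
# 95% confidence intervals for the intercept and the B coefficient.
print(model.conf_int())
```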

Data Visualization (Scatter Plot) on Forest Fires dataset

The Forest Fires dataset was used in the 2008 paper by D. Zhang, Y. Tian and P. Zhang, Kernel-Based Nonparametric Regression Method.

In [Cortez and Morais, 2007], the output ‘area’ was first transformed with a ln(x+1) function. Then, several data mining methods were applied. After fitting the models, the outputs were post-processed with the inverse of the ln(x+1) transform. Four different input setups were used. The experiments were conducted using 10-fold cross-validation × 30 runs. Two regression metrics were measured: MAD and RMSE. A Gaussian support vector machine (SVM) fed with only 4 direct weather conditions (temp, RH, wind and rain) obtained the best MAD value: 12.71 ± 0.01 (mean and 95% confidence interval using a Student’s t-distribution). The best RMSE was attained by the naive mean predictor. An analysis of the regression error curve (REC) shows that the SVM model predicts more examples within a lower admitted error. In effect, the SVM model is better at predicting small fires, which are the majority.

The columns in this dataset are:

  • X
  • Y
  • month
  • day
  • FFMC
  • DMC
  • DC
  • ISI
  • temp
  • RH
  • wind
  • rain
  • area

The scatter plot was generated using Pandas (http://pandas.pydata.org/) and Matplotlib (http://matplotlib.org/).

Sample forest fires data.

Prediction variables (attributes)

  1. X – x-axis spatial coordinate within the Montesinho park map: 1 to 9
  2. Y – y-axis spatial coordinate within the Montesinho park map: 2 to 9
  3. month – month of the year: ‘jan’ to ‘dec’
  4. day – day of the week: ‘mon’ to ‘sun’
  5. FFMC – FFMC index from the FWI system: 18.7 to 96.20
  6. DMC – DMC index from the FWI system: 1.1 to 291.3
  7. DC – DC index from the FWI system: 7.9 to 860.6
  8. ISI – ISI index from the FWI system: 0.0 to 56.10
  9. temp – temperature in Celsius degrees: 2.2 to 33.30
  10. RH – relative humidity in %: 15.0 to 100
  11. wind – wind speed in km/h: 0.40 to 9.40
  12. rain – outside rain in mm/m2 : 0.0 to 6.4

Target variables

  1. area – the burned area of the forest (in ha): 0.00 to 1090.84 (this output variable is very skewed towards 0.0, thus it may make sense to model it with the logarithm transform, as in the sketch below).
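
A minimal sketch of loading the data and applying that transform with numpy and pandas; the file path is an assumption:

```python
import numpy as np
import pandas as pd

# forestfires.csv is a regular comma-separated file (the path is an assumption).
fires = pd.read_csv("forestfires.csv")

# ln(x + 1) transform of the highly skewed burned area, as in Cortez and
# Morais (2007); model outputs can later be mapped back with np.expm1.
fires["log_area"] = np.log1p(fires["area"])
```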

Shape of the DataFrame.

There are 517 observations in the dataset.

Scatter plots.
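
A sketch of how such scatter plots might be drawn with matplotlib, reusing the `fires` DataFrame from the sketch above; the choice of the four direct weather columns mirrors the description earlier in this post:

```python
import matplotlib.pyplot as plt

# Scatter each of the four direct weather inputs against the burned area.
fig, axes = plt.subplots(1, 4, figsize=(16, 4), sharey=True)
for ax, col in zip(axes, ["temp", "RH", "wind", "rain"]):
    ax.scatter(fires[col], fires["area"], alpha=0.4)
    ax.set_xlabel(col)
axes[0].set_ylabel("area (ha)")
plt.show()
```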

Classification on Adult dataset

The Adult dataset was used in Ron Kohavi’s 1996 paper, Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid.

Predict whether income exceeds $50K/yr based on census data. Also known as the “Census Income” dataset. The extraction was done by Barry Becker from the 1994 Census database. The prediction task is to determine whether a person makes over $50K a year.

The columns in this dataset are:

  • age
  • workclass
  • fnlwgt
  • education
  • education-num
  • marital-status
  • occupation
  • relationship
  • race
  • sex
  • capital-gain
  • capital-loss
  • hours-per-week
  • native-country

The model was generated using a Random Forest approach from scikit-learn (http://scikit-learn.org/stable/), together with Pandas (http://pandas.pydata.org/) and Numpy (http://www.numpy.org/); a sketch of this pipeline appears below.

Sample adult data.

Summary of numerical fields


Number of examples for each income level.

True indicates a missing value; False otherwise.
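
A minimal sketch of how such a Random Forest model might be built with scikit-learn, assuming the data has been loaded into a DataFrame named `adult` with the columns listed above plus an `income` target column (both names are assumptions):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 'adult' is assumed to hold the columns listed above plus an 'income' target;
# one-hot encoding turns the categorical inputs into numeric features.
X = pd.get_dummies(adult.drop(columns="income"))
y = adult["income"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out 20%
```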

Model Output generated.