TensorFlow Tutorial for Data Scientist – Part 6

Neural Network on Keras

Create a new Jupyter notebook with a Python 2.7 kernel and name it TensorFlow Deep Learning. Let's import the required libraries and modules.

Keras

%pylab inline

import numpy as np
np.random.seed(123)  # for reproducibility

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils

You should see the output Populating the interactive namespace from numpy and matplotlib. Next, load the image data from MNIST.

from keras.datasets import mnist
 
# Load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

Shape of the dataset

print X_train.shape

There are 60,000 samples in the training set, and each image is 28 pixels x 28 pixels. Plot the first sample with matplotlib.

from matplotlib import pyplot as plt
plt.imshow(X_train[0])

Preprocess input data for Keras. When using the Theano backend, you must explicitly declare a dimension for the depth of the input image. For example, a full-color image with all 3 RGB channels will have a depth of 3. Our MNIST images only have a depth of 1, but we must explicitly declare that. In other words, we want to transform our dataset from having shape (n, width, height) to (n, depth, width, height).

X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)

Show X_train’s dimensions again

print X_train.shape

The final preprocessing step for the input data is to convert our data type to float32 and normalize our data values to the range [0, 1].

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

Preprocess the class labels for Keras. Check the shape of our class label data.

print y_train.shape

Convert 1-dimensional class arrays to 10-dimensional class matrices

Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)

Check shape of our class label data again

print Y_train.shape

Define the model architecture. Declare a sequential model.

model = Sequential()

Declare the input layer. Its shape, (1, 28, 28), corresponds to the (depth, width, height) of each digit image.

model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))

Shape of the current model output

print model.output_shape

Add more layers to our model. For Dense layers, the first parameter is the output size of the layer. Keras automatically handles the connections between layers.

Note that the final layer has an output size of 10, corresponding to the 10 classes of digits.

Also note that the output of the Convolution layers must be flattened (made 1-dimensional) before it is passed to the fully connected Dense layer.

model.add(Convolution2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
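To see why the Flatten layer is needed, you can work out the feature-map sizes by hand. This is a sketch assuming 'valid' padding and stride 1, the Convolution2D defaults:

```python
# Trace the spatial size through the stack above (valid padding, stride 1)
size = 28
size = size - 3 + 1   # first 3x3 convolution
size = size - 3 + 1   # second 3x3 convolution
size = size // 2      # 2x2 max pooling halves each dimension

channels = 32         # filters in the last convolution
flat_units = channels * size * size
print(size)        # 12
print(flat_units)  # 4608 values per image reach the Dense(128) layer
```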

Compile model. Declare the loss function and the optimizer (SGD, Adam, etc.).

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

Fit the model on the training data. Declare the batch size and the number of epochs to train for, then pass in the training data.

model.fit(X_train, Y_train, batch_size=32, epochs=10, verbose=1)

Evaluate model on test data.

score = model.evaluate(X_test, Y_test, verbose=0)

Predict classes for the test images. Assign the result to a new variable so the one-hot labels in Y_test are not overwritten.

predictions = model.predict_classes(X_test, verbose=2)

Learn Tensorflow in 1 day
====================
Join my Tensorflow workshop and learn how to explore & analyze dataset including image and text in just 1 day. Only basic programming knowledge is required.

Upon completion of this course, you will have covered:
a. Useful TensorFlow operators
b. TensorFlow neural networks
c. TensorFlow deep learning
d. TensorFlow Convolutional Neural Networks (CNN) for image recognition
e. TensorFlow Recurrent Neural Networks (RNN) for text analysis
f. Keras

The course fee is only RM300/pax (course materials and meals included). Interested? Fill in your details via the link below and I'll be in touch soon:

http://bit.ly/2vVTqC7

Tensorflow

TensorFlow Tutorial for Data Scientist – Part 4

TensorFlow Convolutional Neural Networks (CNN)


Create a new Jupyter notebook with a Python 2.7 kernel and name it TensorFlow CNN. In this tutorial we will train a simple classifier to classify images of birds. Open your Chrome browser and install the Fatkun Batch Download Image extension. Google the keyword malabar pied hornbill, select Images, and click the Fatkun Batch Download Image icon at the top right. Select This tab and a new window will appear.

Fatkun Batch Download Image

Unselect any images that are not related to the malabar pied hornbill category, then click Save Image. Make sure there are at least 75 images to train on. Wait until all the images finish downloading, then copy them into <your_working_space> > tf_files > birds > Malabar Pied Hornbill. Repeat the same steps for each of these categories:

sacred kingfisher
pied kingfisher
common hoopoe
layard's parakeet
owl
sparrow
brahminy kite
sparrowhawk
wallcreeper
bornean ground cuckoo
blue crowned hanging parrot

Download the retrain script (https://raw.githubusercontent.com/datomnurdin/tensorflow-python/master/retrain.py) to the current directory (<your_working_space>). Open a terminal/command line, cd into the <your_working_space> directory, and run this command to retrain on all the images. It takes around 30 minutes to finish.

python retrain.py \
  --bottleneck_dir=tf_files/bottlenecks \
  --model_dir=tf_files/inception \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir <your_absolute_path>/<your_working_space>/tf_files/birds

Create a prediction script that loads the generated model.

import tensorflow as tf
import sys

# change this as you see fit
image_path = sys.argv[1]

# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line 
                   in tf.gfile.GFile("tf_files/retrained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    
    predictions = sess.run(softmax_tensor, \
             {'DecodeJpeg/contents:0': image_data})
    
    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))
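The argsort idiom above, which ranks labels by confidence, can be checked in isolation with plain NumPy. The scores below are made up for illustration, not model output:

```python
import numpy as np

# Made-up softmax-style scores for four labels
predictions = np.array([[0.10, 0.60, 0.05, 0.25]])

# Same idiom as the script: all indices, sorted from most to least confident
top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
print(top_k)  # label 1 first, then 3, 0, 2
```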

Save the script as detect.py, then predict an image from the terminal/command line.

python detect.py test_image.png

Continue to Part 5.

TensorFlow Tutorial for Data Scientist – Part 3

TensorFlow Deep Learning

Create a new Jupyter notebook with a Python 2.7 kernel and name it TensorFlow Deep Learning. Let's import all the required modules.

import tensorflow as tf
import tempfile
import pandas as pd
import urllib

Define the base feature columns that will be the building blocks used by both the wide and the deep parts of the model.

tf.logging.set_verbosity(tf.logging.ERROR)

# Categorical base columns.
gender = tf.contrib.layers.sparse_column_with_keys(column_name="gender", keys=["Female", "Male"])
race = tf.contrib.layers.sparse_column_with_keys(column_name="race", keys=[
  "Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White"])
education = tf.contrib.layers.sparse_column_with_hash_bucket("education", hash_bucket_size=1000)
relationship = tf.contrib.layers.sparse_column_with_hash_bucket("relationship", hash_bucket_size=100)
workclass = tf.contrib.layers.sparse_column_with_hash_bucket("workclass", hash_bucket_size=100)
occupation = tf.contrib.layers.sparse_column_with_hash_bucket("occupation", hash_bucket_size=1000)
native_country = tf.contrib.layers.sparse_column_with_hash_bucket("native_country", hash_bucket_size=1000)

# Continuous base columns.
age = tf.contrib.layers.real_valued_column("age")
age_buckets = tf.contrib.layers.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
education_num = tf.contrib.layers.real_valued_column("education_num")
capital_gain = tf.contrib.layers.real_valued_column("capital_gain")
capital_loss = tf.contrib.layers.real_valued_column("capital_loss")
hours_per_week = tf.contrib.layers.real_valued_column("hours_per_week")

The wide model is a linear model with a wide set of sparse and crossed feature columns:

wide_columns = [
  gender, native_country, education, occupation, workclass, relationship, age_buckets,
  tf.contrib.layers.crossed_column([education, occupation], hash_bucket_size=int(1e4)),
  tf.contrib.layers.crossed_column([native_country, occupation], hash_bucket_size=int(1e4)),
  tf.contrib.layers.crossed_column([age_buckets, education, occupation], hash_bucket_size=int(1e6))
]
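The idea behind crossed_column is to combine several categorical values into a single key and hash that key into a fixed number of buckets. This standalone sketch illustrates the concept only; it is not TensorFlow's actual hashing scheme, and cross_bucket is a made-up helper name:

```python
import hashlib

def cross_bucket(values, num_buckets):
    # Join the categorical values into one key, then hash it into a bucket.
    # md5 is used here only because it is deterministic across runs.
    key = '_X_'.join(values)
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets

b = cross_bucket(['Bachelors', 'Tech-support'], int(1e4))
print(b)  # a stable bucket id in [0, 10000)
```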

The Deep Model: Neural Network with Embeddings

deep_columns = [
  tf.contrib.layers.embedding_column(workclass, dimension=8),
  tf.contrib.layers.embedding_column(education, dimension=8),
  tf.contrib.layers.embedding_column(gender, dimension=8),
  tf.contrib.layers.embedding_column(relationship, dimension=8),
  tf.contrib.layers.embedding_column(native_country, dimension=8),
  tf.contrib.layers.embedding_column(occupation, dimension=8),
  age, education_num, capital_gain, capital_loss, hours_per_week
]

Combine the wide and deep models into one.

model_dir = tempfile.mkdtemp()
m = tf.contrib.learn.DNNLinearCombinedClassifier(
    fix_global_step_increment_bug=True,
    model_dir=model_dir,
    linear_feature_columns=wide_columns,
    dnn_feature_columns=deep_columns,
    dnn_hidden_units=[100, 50])

Process input data

# Define the column names for the data sets.
COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num",
  "marital_status", "occupation", "relationship", "race", "gender",
  "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket"]
LABEL_COLUMN = 'label'
CATEGORICAL_COLUMNS = ["workclass", "education", "marital_status", "occupation",
                       "relationship", "race", "gender", "native_country"]
CONTINUOUS_COLUMNS = ["age", "education_num", "capital_gain", "capital_loss",
                      "hours_per_week"]

# Download the training and test data to temporary files.
# Alternatively, you can download them yourself and change train_file and
# test_file to your own paths.
train_file = tempfile.NamedTemporaryFile()
test_file = tempfile.NamedTemporaryFile()
urllib.urlretrieve("http://mlr.cs.umass.edu/ml/machine-learning-databases/adult/adult.data", train_file.name)
urllib.urlretrieve("http://mlr.cs.umass.edu/ml/machine-learning-databases/adult/adult.test", test_file.name)

# Read the training and test data sets into Pandas dataframe.
df_train = pd.read_csv(train_file, names=COLUMNS, skipinitialspace=True)
df_test = pd.read_csv(test_file, names=COLUMNS, skipinitialspace=True, skiprows=1)
df_train[LABEL_COLUMN] = (df_train['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)
df_test[LABEL_COLUMN] = (df_test['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)

def input_fn(df):
  # Creates a dictionary mapping from each continuous feature column name (k) to
  # the values of that column stored in a constant Tensor.
  continuous_cols = {k: tf.constant(df[k].values)
                     for k in CONTINUOUS_COLUMNS}
  # Creates a dictionary mapping from each categorical feature column name (k)
  # to the values of that column stored in a tf.SparseTensor.
  categorical_cols = {k: tf.SparseTensor(
      indices=[[i, 0] for i in range(df[k].size)],
      values=df[k].values,
      dense_shape=[df[k].size, 1])
                      for k in CATEGORICAL_COLUMNS}
  # Merges the two dictionaries into one.
  feature_cols = dict(continuous_cols.items() + categorical_cols.items())
  # Converts the label column into a constant Tensor.
  label = tf.constant(df[LABEL_COLUMN].values)
  # Returns the feature columns and the label.
  return feature_cols, label

def train_input_fn():
  return input_fn(df_train)

def eval_input_fn():
  return input_fn(df_test)

Train and evaluate the model.

m.fit(input_fn=train_input_fn, steps=200)
results = m.evaluate(input_fn=eval_input_fn, steps=1)
for key in sorted(results):
    print("%s: %s" % (key, results[key]))

Continue to Part 4: http://intellij.my/2017/08/08/tensorflow-tutorial-for-data-scientist-part-4/

TensorFlow Tutorial for Data Scientist – Part 2

Tensorflow Neural Networks

Create a new Jupyter notebook with a Python 2.7 kernel and name it TensorFlow Neural Networks. Let's import all the required modules.

%pylab inline

import os
import numpy as np
import pandas as pd
from scipy.misc import imread
from sklearn.metrics import accuracy_score
import tensorflow as tf

Set a seed value so that we can control our model's randomness.

# To stop potential randomness
seed = 128
rng = np.random.RandomState(seed)

Set directory paths

root_dir = os.path.abspath('../')
data_dir = os.path.join(root_dir, 'tensorflow-tutorial/data')
sub_dir = os.path.join(root_dir, 'tensorflow-tutorial/sub')

# check for existence
os.path.exists(root_dir)
os.path.exists(data_dir)
os.path.exists(sub_dir)

Read the datasets. These are in .csv format and contain a filename along with the appropriate label.

train = pd.read_csv(os.path.join(data_dir, 'Train', 'train.csv'))
test = pd.read_csv(os.path.join(data_dir, 'Test.csv'))

train.head()

Read images

img_name = rng.choice(train.filename)
filepath = os.path.join(data_dir, 'Train', 'Images', 'train', img_name)

img = imread(filepath, flatten=True)

pylab.imshow(img, cmap='gray')
pylab.axis('off')
pylab.show()

Show image in numpy array format

img

Store all our images as numpy arrays

temp = []
for img_name in train.filename:
    image_path = os.path.join(data_dir, 'Train', 'Images', 'train', img_name)
    img = imread(image_path, flatten=True)
    img = img.astype('float32')
    temp.append(img)
    
train_x = np.stack(temp)

temp = []
for img_name in test.filename:
    image_path = os.path.join(data_dir, 'Train', 'Images', 'test', img_name)
    img = imread(image_path, flatten=True)
    img = img.astype('float32')
    temp.append(img)

test_x = np.stack(temp)

Split the data 70:30 into a training set and a validation set.

split_size = int(train_x.shape[0]*0.7)

train_x, val_x = train_x[:split_size], train_x[split_size:]
train_y, val_y = train.label.values[:split_size], train.label.values[split_size:]

Define some helper functions

def dense_to_one_hot(labels_dense, num_classes=10):
    """Convert class labels from scalars to one-hot vectors"""
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    
    return labels_one_hot

def preproc(unclean_batch_x):
    """Convert values to range 0-1"""
    temp_batch = unclean_batch_x / unclean_batch_x.max()
    
    return temp_batch

def batch_creator(batch_size, dataset_length, dataset_name):
    """Create batch with random samples and return appropriate format"""
    batch_mask = rng.choice(dataset_length, batch_size)
    
    batch_x = eval(dataset_name + '_x')[batch_mask].reshape(-1, input_num_units)
    batch_x = preproc(batch_x)
    
    batch_y = None
    if dataset_name == 'train':
        batch_y = eval(dataset_name).ix[batch_mask, 'label'].values
        batch_y = dense_to_one_hot(batch_y)
        
    return batch_x, batch_y
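A quick standalone check of dense_to_one_hot (NumPy only, with a tiny made-up label array) confirms the one-hot layout:

```python
import numpy as np

# dense_to_one_hot exactly as defined above
def dense_to_one_hot(labels_dense, num_classes=10):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = np.array([0, 3, 9])
one_hot = dense_to_one_hot(labels)
print(one_hot.shape)        # (3, 10)
print(one_hot[1].argmax())  # 3: the single 1 sits at the label's index
```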

Define a neural network architecture with 3 layers: input, hidden, and output. The number of neurons in the input and output layers is fixed, as the input is our 28 x 28 image and the output is a 10 x 1 vector representing the class. We take 500 neurons in the hidden layer; this number can vary according to your needs. We also assign values to the remaining variables.

### set all variables

# number of neurons in each layer
input_num_units = 28*28
hidden_num_units = 500
output_num_units = 10

# define placeholders
x = tf.placeholder(tf.float32, [None, input_num_units])
y = tf.placeholder(tf.float32, [None, output_num_units])

# set remaining variables
epochs = 5
batch_size = 128
learning_rate = 0.01

### define weights and biases of the neural network

weights = {
    'hidden': tf.Variable(tf.random_normal([input_num_units, hidden_num_units], seed=seed)),
    'output': tf.Variable(tf.random_normal([hidden_num_units, output_num_units], seed=seed))
}

biases = {
    'hidden': tf.Variable(tf.random_normal([hidden_num_units], seed=seed)),
    'output': tf.Variable(tf.random_normal([output_num_units], seed=seed))
}

Create neural networks computational graph

hidden_layer = tf.add(tf.matmul(x, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.relu(hidden_layer)

output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']

Define cost of our neural network

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = output_layer, labels=y))

Set the optimizer, i.e. our backpropagation algorithm (Adam, gradient descent, etc.).

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

Initialize all the variables

init = tf.global_variables_initializer()

Create a session and run the neural network in it. Then validate the model's accuracy on the validation set we just created.

with tf.Session() as sess:
    # create initialized variables
    sess.run(init)
    
    ### for each epoch, do:
    ###   for each batch, do:
    ###     create pre-processed batch
    ###     run optimizer by feeding batch
    ###     find cost and reiterate to minimize
    
    for epoch in range(epochs):
        avg_cost = 0
        total_batch = int(train_x.shape[0]/batch_size)
        for i in range(total_batch):
            batch_x, batch_y = batch_creator(batch_size, train_x.shape[0], 'train')
            _, c = sess.run([optimizer, cost], feed_dict = {x: batch_x, y: batch_y})
            
            avg_cost += c / total_batch
            
        print "Epoch:", (epoch+1), "cost =", "{:.5f}".format(avg_cost)
    
    print "\nTraining complete!"
    
    
    # find predictions on val set
    pred_temp = tf.equal(tf.argmax(output_layer, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(pred_temp, "float"))
    print "Validation Accuracy:", accuracy.eval({x: val_x.reshape(-1, input_num_units), y: dense_to_one_hot(val_y)})
    
    predict = tf.argmax(output_layer, 1)
    pred = predict.eval({x: test_x.reshape(-1, input_num_units)})

Test the model and visualize its predictions

img_name = rng.choice(test.filename)
filepath = os.path.join(data_dir, 'Train', 'Images', 'test', img_name)

img = imread(filepath, flatten=True)

test_index = int(img_name.split('.')[0]) - 49000
print "Prediction is: ", pred[test_index]

pylab.imshow(img, cmap='gray')
pylab.axis('off')
pylab.show()

Continue to Part 3: http://intellij.my/2017/08/07/tensorflow-tutorial-for-data-scientist-part-3/

TensorFlow Tutorial for Data Scientist – Part 1

Setup environment

Install Python 2.7.x, https://www.python.org/downloads/. Then install TensorFlow using pip, https://www.tensorflow.org/install/.

pip install tensorflow

TensorFlow

Install Jupyter via PIP.

pip install jupyter

Create a new folder called tensorflow-tutorial and cd into it via the terminal. Run the jupyter notebook command.

Useful TensorFlow operators

The official documentation carefully lays out all available math ops: https://www.tensorflow.org/api_docs/Python/math_ops.html.

Some specific examples of commonly used operators include:

tf.add(x, y): add two tensors of the same type, x + y
tf.sub(x, y): subtract tensors of the same type, x - y
tf.mul(x, y): multiply two tensors element-wise
tf.pow(x, y): take the element-wise power of x to y
tf.exp(x): equivalent to pow(e, x), where e is Euler's number (2.718...)
tf.sqrt(x): equivalent to pow(x, 0.5)
tf.div(x, y): take the element-wise division of x and y
tf.truediv(x, y): same as tf.div, except it casts the arguments to float
tf.floordiv(x, y): same as truediv, except it rounds the result down to an integer
tf.mod(x, y): take the element-wise remainder from division

Create a new Jupyter notebook with a Python 2.7 kernel and name it TensorFlow operators. Let's write a small program to add two numbers.

# import tensorflow
import tensorflow as tf

# build computational graph
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)

addition = tf.add(a, b)

# initialize variables
init = tf.global_variables_initializer()

# create session and run the graph; the with block closes the session automatically
with tf.Session() as sess:
    sess.run(init)
    print "Addition: %i" % sess.run(addition, feed_dict={a: 2, b: 3})

Exercise: Try all these operations and check the output. tf.add(x, y), tf.sub(x, y), tf.mul(x, y), tf.pow(x, y), tf.sqrt(x), tf.div(x, y) & tf.mod(x, y).
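If you want to sanity-check your answers, the element-wise semantics of these operators mirror plain Python on scalars. A plain-Python sketch (no TensorFlow involved):

```python
# Scalar analogues of the TensorFlow operators from the table above
x, y = 7, 2

print(x + y)          # like tf.add: 9
print(x - y)          # like tf.sub: 5
print(x * y)          # like tf.mul: 14
print(x ** y)         # like tf.pow: 49
print(x / float(y))   # like tf.truediv: 3.5
print(x // y)         # like tf.floordiv: 3
print(x % y)          # like tf.mod: 1
```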

Continue to Part 2: http://intellij.my/2017/08/07/tensorflow-tutorial-for-data-scientist-part-2

Image Recognition using Tensorflow

Step 1

Install Tensorflow using PIP, https://www.tensorflow.org/install/.

Step 2

Download bird images from Google Images using this Chrome extension, https://chrome.google.com/webstore/detail/fatkun-batch-download-ima/nnjjahlikiabnchcpehcpkdeckfgnohf?hl=en. Create 12 folders, one per bird, inside tf_files > birds. Make sure each bird folder contains at least 60-70 images of the same category.

bird folder name

Step 3

Use this command to retrain your custom images

sudo python retrain.py --bottleneck_dir=tf_files/bottlenecks --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir /tf_files/birds

Step 4

Predict

python detect.py sample.png

prediction

Reference: https://github.com/datomnurdin/tensorflow-python

Android SQLite Tutorial

In this tutorial you will develop a simple to-do-list native app for Android using SQLite.

Download the required software packages

Download and install Android Studio and Android SDK.

Android Studio + SDK – http://developer.android.com/sdk/index.html

Setting up your development environment

Open your Android Studio and choose Start a new Android Studio project.

Start a new Android Studio project

Enter your custom Application name, Company Domain and select Project location. Click Next.

Configure your new project.

Select Phone and Tablet. Make sure API 15 is selected. Click Next.

Configure your new project.

Select Empty Activity and click Next.

Add an activity to Mobile.

Click Finish.

Customize the activity.

Create a new Java class. Right-click on the my.intellij.androidsqlite package > New > Java Class.

Create a new java class.

Name it Task and click OK.

Create New Class

Then replace the contents of Task.java with this code. The Task class has getter and setter methods for all fields, so a single task can be maintained as an object.

package my.intellij.androidsqlite;

public class Task {

    //private variables
    int taskId;
    String name;
    String description;

    // Empty constructor
    public Task(){

    }
    // constructor
    public Task(int taskId, String name, String description){
        this.taskId = taskId;
        this.name = name;
        this.description = description;
    }

    // constructor
    public Task(String name, String description){
        this.name = name;
        this.description = description;
    }
    // getting taskId
    public int getTaskId(){
        return this.taskId;
    }

    // setting taskId
    public void setID(int taskId){
        this.taskId = taskId;
    }

    // getting name
    public String getName(){
        return this.name;
    }

    // setting name
    public void setName(String name){
        this.name = name;
    }

    // getting description
    public String getDescription(){
        return this.description;
    }

    // setting description
    public void setDescription(String description){
        this.description = description;
    }
}

Create a new Java class. Right-click on the my.intellij.androidsqlite package > New > Java Class.

Create a new java class.

Name it DatabaseHandler and click OK.

Create New Class

Write the DatabaseHandler class to handle all database CRUD (Create, Read, Update, Delete) operations.

package my.intellij.androidsqlite;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

import java.util.ArrayList;
import java.util.List;

public class DatabaseHandler extends SQLiteOpenHelper {
    // All Static variables
    // Database Version
    private static final int DATABASE_VERSION = 1;

    // Database Name
    private static final String DATABASE_NAME = "tasksManager";

    // Tasks table name
    private static final String TABLE_TASKS = "tasks";

    // Tasks Table Columns names
    private static final String KEY_TASK_ID = "taskId";
    private static final String KEY_NAME = "name";
    private static final String KEY_DESCRIPTION = "description";

    public DatabaseHandler(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    // Creating Tables
    @Override
    public void onCreate(SQLiteDatabase db) {
        String CREATE_TASKS_TABLE = "CREATE TABLE " + TABLE_TASKS + "("
                + KEY_TASK_ID + " INTEGER PRIMARY KEY," + KEY_NAME + " TEXT,"
                + KEY_DESCRIPTION + " TEXT" + ")";
        db.execSQL(CREATE_TASKS_TABLE);
    }

    // Upgrading database
    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Drop older table if existed
        db.execSQL("DROP TABLE IF EXISTS " + TABLE_TASKS);

        // Create tables again
        onCreate(db);
    }

    /**
     * All CRUD(Create, Read, Update, Delete) Operations
     */

    // Adding new task
    void addTask(Task task) {
        SQLiteDatabase db = this.getWritableDatabase();

        ContentValues values = new ContentValues();
        values.put(KEY_NAME, task.getName()); // Task Name
        values.put(KEY_DESCRIPTION, task.getDescription()); // Task Description

        // Inserting Row
        db.insert(TABLE_TASKS, null, values);
        db.close(); // Closing database connection
    }

    // Getting single task
    Task getTask(int taskId) {
        SQLiteDatabase db = this.getReadableDatabase();

        Cursor cursor = db.query(TABLE_TASKS, new String[] { KEY_TASK_ID,
                        KEY_NAME, KEY_DESCRIPTION }, KEY_TASK_ID + "=?",
                new String[] { String.valueOf(taskId) }, null, null, null, null);
        if (cursor != null)
            cursor.moveToFirst();

        Task task = new Task(Integer.parseInt(cursor.getString(0)),
                cursor.getString(1), cursor.getString(2));
        // return task
        return task;
    }

    // Getting All Tasks
    public List<Task> getAllTasks() {
        List<Task> taskList = new ArrayList<Task>();
        // Select All Query
        String selectQuery = "SELECT  * FROM " + TABLE_TASKS;

        SQLiteDatabase db = this.getWritableDatabase();
        Cursor cursor = db.rawQuery(selectQuery, null);

        // looping through all rows and adding to list
        if (cursor.moveToFirst()) {
            do {
                Task task = new Task();
                task.setID(Integer.parseInt(cursor.getString(0)));
                task.setName(cursor.getString(1));
                task.setDescription(cursor.getString(2));
                // Adding task to list
                taskList.add(task);
            } while (cursor.moveToNext());
        }

        // return task list
        return taskList;
    }

    // Updating single task
    public int updateTask(Task task) {
        SQLiteDatabase db = this.getWritableDatabase();

        ContentValues values = new ContentValues();
        values.put(KEY_NAME, task.getName());
        values.put(KEY_DESCRIPTION, task.getDescription());

        // updating row
        return db.update(TABLE_TASKS, values, KEY_TASK_ID + " = ?",
                new String[] { String.valueOf(task.getTaskId()) });
    }

    // Deleting single task
    public void deleteTask(Task task) {
        SQLiteDatabase db = this.getWritableDatabase();
        db.delete(TABLE_TASKS, KEY_TASK_ID + " = ?",
                new String[] { String.valueOf(task.getTaskId()) });
        db.close();
    }


    // Getting tasks count
    public int getTasksCount() {
        String countQuery = "SELECT  * FROM " + TABLE_TASKS;
        SQLiteDatabase db = this.getReadableDatabase();
        Cursor cursor = db.rawQuery(countQuery, null);
        int count = cursor.getCount(); // read the count before closing the cursor
        cursor.close();

        // return count
        return count;
    }
}
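Outside Android, the SQL that the handler issues can be prototyped with Python's built-in sqlite3 module. This is a sketch of the same schema and CRUD statements, not part of the Android app:

```python
import sqlite3

# In-memory database mirroring the tasks table created in onCreate
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE tasks(taskId INTEGER PRIMARY KEY, name TEXT, description TEXT)")

# addTask: the id is auto-assigned by INTEGER PRIMARY KEY
c.execute("INSERT INTO tasks(name, description) VALUES (?, ?)", ("Breakfast", "Nasi Lemak"))

# updateTask: change the description of task 1
c.execute("UPDATE tasks SET description = ? WHERE taskId = ?", ("Roti Canai", 1))

# getTask: fetch the single row back
row = c.execute("SELECT taskId, name, description FROM tasks WHERE taskId = ?", (1,)).fetchone()
print(row)

conn.close()
```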

Open MainActivity and replace with this code.

package my.intellij.androidsqlite;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

import java.util.List;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        DatabaseHandler db = new DatabaseHandler(this);

        /**
         * CRUD Operations
         * */
        // Inserting Tasks
        Log.d("Insert: ", "Inserting ..");
        db.addTask(new Task("Breakfast", "Nasi Lemak"));
        db.addTask(new Task("Lunch", "Nasi Ayam"));
        db.addTask(new Task("Dinner", "MCD"));

        // Reading all tasks
        Log.d("Reading: ", "Reading all tasks..");
        List<Task> tasks = db.getAllTasks();

        for (Task cn : tasks) {
            String log = "TaskId: "+cn.getTaskId()+" ,Name: " + cn.getName() + " ,Description: " + cn.getDescription();
            // Writing Tasks to log
            Log.d("Name: ", log);
        }
    }
}

Build and run the app. The output will show up as below.

Logcat output.

Android (GCM) Push Notification Tutorial

In this tutorial, I will show you how to implement push notifications in your native Android app.

Download the required software packages.

  1. Download and install Android Studio and the Android SDK – http://developer.android.com/sdk/index.html.

Go to Google Developer (https://developers.google.com/mobile/add). Click Pick a platform.

Google services.

Click Enable services for my Android App.

Google Services.

Enter your App name and Android package name. Click Choose and configure services.

Google Services.

Select Cloud Messaging and click Enable Google Cloud Messaging.

Google Services.

Make a note of the server API key and press Close.

Google Services.

Click Generate configuration files.

Google Services.

Click Download google-services.json. Once the file has been generated, download it and place it inside your Android Studio project’s app directory.

Google Services.

Setting up your development environment

Open your Android Studio and choose Start a new Android Studio project.

Start a new Android Studio project

Enter your custom Application name, Company Domain and select Project location. Click Next.

Configure your new project.

Select Phone and Tablet. Make sure API 15 is selected. Click Next.

Configure your new project.

Select Empty Activity and click Next.

Configure your new project

Your app must request the C2D_MESSAGE permission if it isn’t doing so already. Add the following lines inside the <manifest> tag in your AndroidManifest.xml:

<permission android:name="my.intellij.androidpushnotification.permission.C2D_MESSAGE" android:protectionLevel="signature" />
<uses-permission android:name="my.intellij.androidpushnotification.permission.C2D_MESSAGE" />

The notifications are received in the form of broadcasts. To handle those broadcasts, our app needs a BroadcastReceiver. However, we don’t have to create it manually. We can instead use the GcmReceiver class as the BroadcastReceiver.

The BroadcastReceiver must have an intent-filter that responds to the com.google.android.c2dm.intent.RECEIVE action, and the name of its category must match your project’s package name. Add the following code inside the <application> tag of the manifest:

<receiver android:name="com.google.android.gms.gcm.GcmReceiver" android:exported="true" android:permission="com.google.android.c2dm.permission.SEND" >
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
        <category android:name="my.intellij.androidpushnotification" />
    </intent-filter>
</receiver>

Open build.gradle (Project: AndroidPushNotification) and add the following to the dependencies block:

classpath 'com.google.gms:google-services:1.5.0'

Next, apply the plugin at the top of the app module’s build.gradle:

apply plugin: 'com.google.gms.google-services'

Then add this code at the bottom of the app module’s build.gradle.

configurations.all {
    resolutionStrategy {
        force 'com.android.support:design:23.4.0'
        force 'com.android.support:support-v4:23.4.0'
        force 'com.android.support:appcompat-v7:23.4.0'
    }
}

To be able to use the GCM API, add com.google.android.gms:play-services as a compile dependency in the same file:

compile "com.google.android.gms:play-services:8.3.0"

Create a new Java class. Right-click on the my.intellij.androidpushnotification package > New > Java Class.

Create a new java class.

Name it RegistrationService and click OK.

Create New Class

Then replace the contents of RegistrationService.java with this code.

package my.intellij.androidpushnotification;

import android.app.IntentService;
import android.content.Intent;
import android.util.Log;

import com.google.android.gms.gcm.GcmPubSub;
import com.google.android.gms.gcm.GoogleCloudMessaging;
import com.google.android.gms.iid.InstanceID;

import java.io.IOException;

public class RegistrationService extends IntentService {
    public RegistrationService() {
        super("RegistrationService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        InstanceID myID = InstanceID.getInstance(this);

        try {
            String registrationToken = myID.getToken(
                    getString(R.string.gcm_defaultSenderId),
                    GoogleCloudMessaging.INSTANCE_ID_SCOPE,
                    null
            );

            Log.d("Registration Token", registrationToken);

            GcmPubSub subscription = GcmPubSub.getInstance(this);
            subscription.subscribe(registrationToken, "/topics/my_little_topic", null);

        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Define the service in AndroidManifest.xml.

<service android:name=".RegistrationService" android:exported="false" />

Create a new Java class. Right-click on the my.intellij.androidpushnotification package > New > Java Class.

Create a new java class.

Name it TokenRefreshListenerService and click OK.

Create New Class

Then replace the contents of TokenRefreshListenerService.java with this code.

package my.intellij.androidpushnotification;

import android.content.Intent;
import com.google.android.gms.iid.InstanceIDListenerService;

public class TokenRefreshListenerService extends InstanceIDListenerService {
    @Override
    public void onTokenRefresh() {
        Intent i = new Intent(this, RegistrationService.class);
        startService(i);
    }
}

Define the service in AndroidManifest.xml.

<service android:name=".TokenRefreshListenerService" android:exported="false">
    <intent-filter>
        <action android:name="com.google.android.gms.iid.InstanceID" />
    </intent-filter>
</service>

Open MainActivity.java and replace its contents with this code.

package my.intellij.androidpushnotification;

import android.content.Intent;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Intent i = new Intent(this, RegistrationService.class);
        startService(i);
    }
}

Create a new Java class. Right-click on the my.intellij.androidpushnotification package > New > Java Class.

Create a new java class.

Name it NotificationsListenerService and click OK.

Create New Class

Then replace the contents of NotificationsListenerService.java with this code.

package my.intellij.androidpushnotification;

import com.google.android.gms.gcm.GcmListenerService;

public class NotificationsListenerService extends GcmListenerService {

}

Define the service in AndroidManifest.xml.

<service android:name=".NotificationsListenerService" android:exported="false" >
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
    </intent-filter>
</service>

Go to https://material.io/icons/ and search for cloud icon. Once you download the icon, place it inside the res folder of your project. Use ic_cloud_black_48dp as the icon.

Google’s Material Design Icons Library

Build and run your app. You will see an output like this in your Android Studio console.

Create a send.py file and fill it with this code.

from urllib2 import *
import urllib
import json
import sys

MY_API_KEY="AIzaSyCJu8pl_GmHLhM9SfFc-31vipg9rsfeD5I"

messageTitle = sys.argv[1]
messageBody = sys.argv[2]

data={
    "to" : "/topics/my_little_topic",
    "notification" : {
        "body" : messageBody,
        "title" : messageTitle,
        "icon" : "ic_cloud_black_48dp"
    }
}

dataAsJSON = json.dumps(data)

request = Request(
    "https://gcm-http.googleapis.com/gcm/send",
    dataAsJSON,
    { "Authorization" : "key="+MY_API_KEY,
      "Content-type" : "application/json"
    }
)

print urlopen(request).read()
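The script above targets Python 2 (urllib2 and the print statement). If you are on Python 3, where urllib2 was merged into urllib.request, a roughly equivalent sketch looks like the following. It assumes the same endpoint, topic and payload as send.py; the MY_API_KEY value here is a placeholder you must replace with your own server API key.

```python
import json
from urllib.request import Request, urlopen

MY_API_KEY = "YOUR_SERVER_API_KEY"  # placeholder: use your own server API key


def build_request(title, body):
    """Build (but do not send) the GCM topic-send request."""
    data = {
        "to": "/topics/my_little_topic",
        "notification": {
            "body": body,
            "title": title,
            "icon": "ic_cloud_black_48dp",
        },
    }
    return Request(
        "https://gcm-http.googleapis.com/gcm/send",
        data=json.dumps(data).encode("utf-8"),  # Python 3 expects a bytes body
        headers={
            "Authorization": "key=" + MY_API_KEY,
            "Content-type": "application/json",
        },
    )

# To actually send the notification:
#   print(urlopen(build_request(title, body)).read())
```

Separating request construction from sending keeps the script easy to test without hitting the network; the command-line usage stays the same as in the Python 2 version.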

Open terminal and run the send.py script.

python send.py "My first push notification" "GCM API is wonderful"

Running the Script

Android RTP (Video & Audio Stream via VLC Player)

In this tutorial, you’ll learn how to use Android Studio to start Android RTP project development. You will learn the following:

  • How to use Android Studio to create a native project.
  • How to use RTP in an Android project.

Download the required software packages

Open Android Studio.

Start a new Android Studio project

Select Start a new Android Studio project. Enter your custom Application name, Company Domain and select Project location. Click Next.

Configure your new project.

Select Phone and Tablet. Make sure API 15 is selected. Click Next.

Configure your new project.

Select Empty Activity and click Next.

Add an activity to Mobile.

Click Finish.

Customize the activity.

To import the libstreaming library (https://github.com/fyhertz/libstreaming) into Android Studio, you can try this:

  • Open your project in Android Studio.
  • Download the library (using Git, or a zip archive to unzip).

Select File > New > Import Module.

New Module.

Select library libstreaming source directory and click Next.

New Module.

Click Finish.

New Module.

In the root of your project directory, create/modify the settings.gradle file. It should contain something like the following:

include ':app', ':libstreaming'

  • Gradle clean & build, or close the project and reopen/re-import it.
  • Edit the app module’s build.gradle to add this in the “dependencies” section:

dependencies {
    //...
    compile project(':libstreaming')
}

Open MainActivity.java and replace its contents with this code.

package my.intellij.androidrtp;

import android.content.Intent;
import android.content.SharedPreferences;
import android.content.pm.ActivityInfo;
import android.os.Bundle;
import android.preference.PreferenceManager;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.Window;
import android.view.WindowManager;

import net.majorkernelpanic.streaming.Session;
import net.majorkernelpanic.streaming.SessionBuilder;
import net.majorkernelpanic.streaming.audio.AudioQuality;
import net.majorkernelpanic.streaming.gl.SurfaceView;
import net.majorkernelpanic.streaming.rtsp.RtspServer;

/**
 * A straightforward example of how to use the RTSP server included in libstreaming.
 */
public class MainActivity extends AppCompatActivity implements SurfaceHolder.Callback, RtspServer.CallbackListener, Session.Callback{

    private final static String TAG = "MainActivity";

    private SurfaceView mSurfaceView;
    private Session mSession;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
        requestWindowFeature(Window.FEATURE_NO_TITLE);

        setContentView(R.layout.activity_main);

        mSurfaceView = (SurfaceView) findViewById(R.id.surface);

        
        // Sets the port of the RTSP server to 1234
        SharedPreferences.Editor editor = PreferenceManager.getDefaultSharedPreferences(this).edit();
        editor.putString(  RtspServer.KEY_PORT, String.valueOf(1234));
        editor.commit();



        // Configures the SessionBuilder
        mSession =  SessionBuilder.getInstance()
                .setCallback(this)
                .setSurfaceView((net.majorkernelpanic.streaming.gl.SurfaceView) mSurfaceView)
                .setPreviewOrientation(90)
                .setContext(getApplicationContext())
                .setAudioEncoder(SessionBuilder.AUDIO_AAC)
                .setAudioQuality(new AudioQuality(8000, 16000))
                .setVideoEncoder(SessionBuilder.VIDEO_H264)
                //.setVideoQuality(new VideoQuality(320,240,20,500000))
                .build();

        mSurfaceView.getHolder().addCallback(this);

        ((net.majorkernelpanic.streaming.gl.SurfaceView) mSurfaceView).setAspectRatioMode(net.majorkernelpanic.streaming.gl.SurfaceView.ASPECT_RATIO_PREVIEW);

        // Starts the RTSP server
        this.startService(new Intent(this,RtspServer.class));

        Log.d("test", "1");



        mSession.startPreview(); //camera preview on phone surface
        mSession.start();

    }

    @Override
    public void onResume()
    {
        super.onResume();
        mSession.stopPreview();
    }

    @Override
    public void onDestroy()
    {
        super.onDestroy();
        mSession.release();
        mSurfaceView.getHolder().removeCallback(this);
    }

    //region   ----------------------------------implement methods required
    @Override
    public void onError(RtspServer server, Exception e, int error) {
        Log.e("Server", e.toString());
    }

    @Override
    public void onMessage(RtspServer server, int message) {
        Log.e("Server", "unknown message");
    }

    @Override
    public void onBitrateUpdate(long bitrate) {

    }

    @Override
    public void onSessionError(int reason, int streamType, Exception e) {

    }

    @Override
    public void onPreviewStarted() {

    }

    @Override
    public void onSessionConfigured() {

    }

    @Override
    public void onSessionStarted() {

    }

    @Override
    public void onSessionStopped() {

    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {

    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {

    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {

    }

    //endregion
}

Open activity_main.xml in app > res > layout and replace its contents with this code.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@android:color/background_light"
    tools:context="my.intellij.androidrtp.MainActivity">

    <net.majorkernelpanic.streaming.gl.SurfaceView android:id="@+id/surface" android:layout_width="match_parent" android:layout_height="match_parent" />

</RelativeLayout>

Open AndroidManifest.xml and replace its contents with this code.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="my.intellij.androidrtp">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service android:name="net.majorkernelpanic.streaming.rtsp.RtspServer" />
    </application>

    <uses-feature android:name="android.hardware.camera" />
    <uses-feature android:name="android.hardware.camera.autofocus" />

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.WAKE_LOCK" />

</manifest>

Build and run the Android RTP sender app. Now start the VLC player and click the Media > Open Network Stream menu. Enter the following URL (update it with your Android device’s IP) and click the Play button.

rtsp://192.168.0.113:1234
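The URL is simply rtsp://<device-ip>:<port>, where the port must match the RtspServer.KEY_PORT value (1234) set in MainActivity. As a quick sanity check, a tiny hypothetical helper to compose it:

```python
def rtsp_url(ip, port=1234):
    # Compose the stream URL to enter into VLC; the default port 1234
    # matches the RtspServer.KEY_PORT preference set in MainActivity.
    return "rtsp://{}:{}".format(ip, port)

# e.g. rtsp_url("192.168.0.113") -> "rtsp://192.168.0.113:1234"
```

If VLC cannot connect, the usual suspects are a mismatched port, a stale device IP, or the phone and the computer not being on the same network.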

Open Network.