use IPython notebook with virtualenv

It is possible to use IPython notebook inside a virtual environment. To do that, you need to create a new IPython kernel and link it to the virtual environment.

  • Create the virtualenv:
    mkvirtualenv my_test
    If the virtualenv my_test has been created before, activate it instead:
    workon my_test
  • Install the IPython kernel module into your virtualenv:
    pip install ipython[notebook]
    or use this:
    pip install ipykernel
  • Now run the kernel “self-install” script:
    python -m ipykernel install --user --name=my_test
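
To confirm that the kernel was registered (assuming the jupyter command is on your PATH), you can list the installed kernelspecs:

jupyter kernelspec list

The my_test kernel should then be selectable in the notebook under Kernel > Change kernel.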

python function decorators

Python’s decorator is a powerful tool to wrap (decorate) functions, for example to add pre-processing steps before a function call or post-processing steps after it. The following example is modified from the example on the functools doc page and illustrates two important points, listed below the output.

from functools import wraps
def my_decorator(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        """Docstring of wrapper"""
        print('Calling decorated function')
        return f(*args, **kwds)
    return wrapper

@my_decorator
def example(foo):
    """Docstring of example"""
    print('Called example function')
    return foo + 1

print(example(2))
print(example.__name__)
print(example.__doc__)
The output should be:
Calling decorated function
Called example function
3
example
Docstring of example
  1. If wraps(f) were not called, then output lines 4 and 5 would be “wrapper” and “Docstring of wrapper”. (Of course, you would generally not write a docstring for a wrapper function.)
  2. If the wrapper did not return the result of f(*args, **kwds), output line 3 would be “None”.
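
As a further illustration of the pre-/post-processing idea, here is a minimal sketch (not from the original example) of a decorator that times the wrapped function; the names timed and slow_add are made up for this sketch:

import time
from functools import wraps

def timed(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        start = time.time()              # pre-processing: record the start time
        result = f(*args, **kwds)        # call the decorated function
        print('%s took %.3f s' % (f.__name__, time.time() - start))  # post-processing
        return result
    return wrapper

@timed
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

print(slow_add(1, 2))  # prints the timing line, then 3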

 

 


On financial independence

To achieve financial independence, a state in which one’s assets generate more income than one’s expenses, it is generally assumed that one has to save early and save big, due to the power of compounding. This depends on one critical assumption: that the rolling return over a 30-year window remains largely the same regardless of the entry point. Although accurately predicting future returns is difficult, computer simulations can easily be applied to past data to test this assumption.
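
A minimal sketch of such a simulation (the annual_returns array below is random placeholder data and would be replaced with actual historical yearly returns):

import numpy as np

# Placeholder data: swap in real historical annual returns (0.07 means +7%).
rng = np.random.default_rng(0)
annual_returns = rng.normal(0.07, 0.15, size=100)

window = 30
growth = 1.0 + annual_returns
# Annualized return of every rolling 30-year window, one per possible entry point.
rolling = np.array([
    np.prod(growth[i:i + window]) ** (1.0 / window) - 1.0
    for i in range(len(growth) - window + 1)
])
print(rolling.min(), rolling.mean(), rolling.max())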

Data TBD by Miaomiao. 

Personal Goals:

  • Keep monthly expenses under 5k.
  • Based on this expense level and the 4% rule, 1.5 M is needed to achieve perpetual financial freedom (see the quick check after this list).
  • Achieve financial independence by age 40.
  • Save 71k per year, year after year, plus 401(k).
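
A quick check of the 1.5 M figure under the 4% rule:

monthly_expense = 5000
annual_expense = monthly_expense * 12    # 60k per year
target = annual_expense / 0.04           # 4% safe withdrawal rate
print(target)                            # 1500000.0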

Skills to be developed by MM:

  • Trading/investment skills
  • General machine learning

Skills to be developed by TT:

  • Deep learning
  • Computer vision
  • Skills that are not employer-specific, such as software development skills, app development, English skills, and public speaking skills.

installation of tensorflow on windows or MacOS

Windows:

I had the following error when installing TensorFlow on Windows 7 (Python 3.5.2 :: Anaconda 4.1.1 (64-bit)):

Cannot remove entries from nonexistent file d:\anaconda32\envs\tst\lib\site-packages\easy-install.pth

This is an Anaconda environment problem. I found the solution here:

pip install --ignore-installed --upgrade pip setuptools
pip install --upgrade tensorflow

Now add the Python from the Anaconda package permanently to the Cygwin PATH.

echo 'export PATH=/cygdrive/c/anaconda3:$PATH' >> .bashrc
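
Then reload the file (or open a new Cygwin shell) so the change takes effect; this assumes the .bashrc above is the one in your home directory:

source ~/.bashrc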

MacOS

Install TensorFlow in a virtualenv to avoid version conflicts with other Python environments.

mkvirtualenv cv -p python3
workon cv
pip3 install --upgrade tensorflow # for Python 3.n

Validate the installation

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
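
If the installation works, this should print the greeting, typically shown as a bytes literal such as b'Hello, TensorFlow!' under Python 3.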

Run a test model

git clone https://github.com/tensorflow/models.git
cd ~/TensorFlow/models/tutorials/image/imagenet
python classify_image.py --image_file ~/TensorFlow/daisy.jpeg

TBD


Affine and Perspective Transformation

In an affine transformation (link, link2), all parallel lines in the original image remain parallel in the output image. To find the transformation matrix, we need 3 points from the input image and their corresponding locations in the output image. cv2.getAffineTransform then creates a 2×3 matrix, which is passed to cv2.warpAffine. An affine transform can perform rotation, translation, scaling, and shearing.

import cv2
import numpy as np

img = cv2.imread('input.jpg')   # placeholder filename: any test image
rows, cols = img.shape[:2]

pts1 = np.float32([[50,50],[200,50],[50,200]])
pts2 = np.float32([[10,100],[200,50],[100,250]])
M = cv2.getAffineTransform(pts1,pts2)
dst = cv2.warpAffine(img,M,(cols,rows))

For a perspective transformation (see the links above), you need a 3×3 transformation matrix. Straight lines remain straight after the transformation. To find this transformation matrix, you need 4 points on the input image and the corresponding points on the output image; no 3 of these 4 points should be collinear. The transformation matrix can then be found with cv2.getPerspectiveTransform, and applied with cv2.warpPerspective using this 3×3 matrix.

pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])

M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(300,300))

In summary,

  • Affine transformation preserves lines and parallelism.
  • Perspective transformation preserves lines. An affine transform is a special case of a perspective transformation (see the sketch after this list).
  • Note that an affine transformation does not preserve angles; a conformal transformation does.
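
One way to see the special-case relationship, reusing M, img, rows and cols from the affine example above: extend the 2×3 affine matrix to 3×3 by appending the row [0, 0, 1]; cv2.warpPerspective then reproduces the cv2.warpAffine result.

M3 = np.vstack([M, [0, 0, 1]])                     # 3x3 matrix, last row [0, 0, 1]
dst2 = cv2.warpPerspective(img, M3, (cols, rows))  # should match dst from warpAffine above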

 


Important concepts in CV

This post serves as a list of miscellaneous techniques scattered across the CV field. They are not listed in any particular order for now.

  1. Hard-negative mining. For each image, and each possible scale of each image, in your negative training set, apply the sliding window technique and slide your window across the image. At each window compute your HOG descriptors and apply your classifier. If your classifier (incorrectly) classifies a given window as an object (and it will; there will absolutely be false positives), record the feature vector associated with the false-positive patch along with the probability of the classification. This approach is called hard-negative mining. Take the false-positive samples found during the hard-negative mining stage, sort them by their confidence (i.e. probability), and re-train your classifier using these hard-negative samples. (Note: this mining and re-training loop can be applied iteratively, but in practice one stage of hard-negative mining usually, though not always, tends to be enough. The gains in accuracy on subsequent runs of hard-negative mining tend to be minimal.)
  2. Non-Maximum Suppression (link) can be used if multiple bounding boxes are returned for the same detected object.
  3. Find bounding shapes (openCV link): boundingRect() finds the up-right bounding rectangle of a point set; minAreaRect() finds the rotated bounding rectangle of minimum area, and is often used with boxPoints(), which returns the four vertices of the rectangle; minEnclosingTriangle() finds the bounding triangle with minimum area; minEnclosingCircle() finds the bounding circle with minimum area.
  4. For overlapping object detection, use the watershed algorithm (openCV tutorial, PIS tutorial).
  5. Thresholding reduces a grayscale image to a binary image. Automatic (parameterless) threshold detection is usually more computationally intensive than methods that require manual tuning. Two widely used methods are Otsu’s method and Ridler-Calvard’s method, both of which are histogram-based thresholding methods.
  6. Otsu’s method assumes the pixels in a grayscale image are divided into two classes, the foreground and the background, following a bimodal histogram, and finds the globally optimal threshold that minimizes the intra-cluster variance, or equivalently, maximizes the inter-cluster variance. However, when the image background is uneven, finding a global threshold that generates good results may simply be impossible. The original method can be extended to a 2D Otsu’s adaptive method, which finds a local threshold based on the gray-scale value of each pixel and the average of its neighboring pixels. This can help greatly with noise-corrupted images or images with an uneven background (nonuniform illumination). Theoretically any method used for estimating the threshold can be made adaptive if applied locally in a block-wise or sliding-window fashion, but the computational cost may be quite high, as with the 2D Otsu’s method. Ridler-Calvard’s method is an iterative version of Otsu’s method, and is generally faster and less computationally intensive than Otsu’s method.
  7. Drawbacks of Otsu’s method: it assumes the histogram is bimodal; it applies a global threshold and thus does not work with uneven background; it breaks when the two classes have extremely different sizes.
  8. Multilevel thresholding can be applied when there are more than 2 modes in the histogram, but it proves to be more difficult in practice.
  9. Histogram-based thresholding methods work best when the histogram peaks are tall, narrow, symmetric, and separated by deep valleys. If there is no clear valley in the histogram, it means there are background pixels with gray levels similar to those of object pixels. In this case, hysteresis thresholding, which employs two threshold values, one on each side of the valley, can be used. The ratio of the two thresholds is generally between 2:1 and 3:1. In hysteresis thresholding, low-thresholded edges that are connected to high-thresholded edges are retained; low-thresholded edges that are not connected to high-thresholded edges are removed. Hysteresis thresholding is the only method here that considers some form of spatial proximity; the other methods completely ignore spatial information.
  10. Niblack’s method is a much less computationally intensive alternative, which sets the local threshold to t(i, j) = μ(i, j) + wσ(i, j), a weighted combination of the local mean and standard deviation, but w needs to be tuned manually.
  11. Edge detection employs gradients to find edge-like regions. The Sobel operator finds the partial derivatives of the image along the x- and y-axes by convolving with a ksize × ksize kernel. When ksize is 3, the Sobel operator may generate noticeable inaccuracies; the similar Scharr operator is just as fast but generates more accurate results. The Laplacian operator adds up the second-order derivatives along the x- and y-axes calculated by the Sobel operator.
  12. The Canny detector is also called the optimal detector; it has a low error rate, good localization, and minimal response. It has four steps: filter out noise, find the intensity gradient of the image, apply non-maximum suppression (only thin lines will remain), and apply hysteresis thresholding using two thresholds.
  13. Contours can be found by calling the cv2.findContours() function, which takes an 8-bit single-channel (grayscale) input image. The image is treated as binary, since all nonzero values are treated as one. Such a binary image can be generated using threshold(), adaptiveThreshold(), or Canny(), etc. The contour-finding function uses this algorithm and returns a topological hierarchy of contours. (A short OpenCV sketch combining thresholding, Canny, contours, and bounding shapes follows this list.)
  14. to be continued…
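
A minimal sketch tying several of the items above together (the filename coins.png is a placeholder; the return signature of findContours differs between OpenCV 3.x and 4.x, as noted in the comments):

import cv2

img = cv2.imread('coins.png', cv2.IMREAD_GRAYSCALE)  # placeholder test image

# Item 6: Otsu's method picks a global threshold from the histogram automatically.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Items 9 and 12: Canny uses hysteresis thresholding, low:high ratio of roughly 1:2 to 1:3.
edges = cv2.Canny(img, 100, 200)

# Item 13: contours from the binary image, with a topological hierarchy.
# OpenCV 3.x returns (image, contours, hierarchy); 4.x returns (contours, hierarchy).
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# Item 3: bounding shapes for each contour.
for c in contours:
    x, y, w, h = cv2.boundingRect(c)          # up-right bounding rectangle
    (cx, cy), r = cv2.minEnclosingCircle(c)   # minimum enclosing circle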

ML notes

The ROC curve characterizes the performance of a binary classifier as its discrimination threshold varies. It plots the true positive rate (TPR) against the false positive rate (FPR); in other words, it plots recall/sensitivity against (1 - specificity).
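
A minimal sketch using scikit-learn, with made-up labels and scores just to illustrate the quantities involved:

from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical ground-truth labels and classifier scores.
y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(fpr)                             # false positive rate = 1 - specificity
print(tpr)                             # true positive rate = recall / sensitivity
print(roc_auc_score(y_true, y_score))  # area under the ROC curve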

 

 
