The first two weeks of Andrew Ng’s Machine Learning course at Coursera started quite simple and easy, especially for someone with some prior knowledge of Statistics/Machine Learning. There were two basic tutorials, one on Linear Algebra and one on Octave, the computer language chosen for the course. Although simple, there is always something worth learning and keeping in mind. One thing that I try to take from this kind of Machine Learning material is the difference in how statisticians and machine-learning-oriented people treat the same subjects.

**Machine Learning definition**

Firstly, the course tries to define Machine Learning. Even though there is no single correct definition, I would use the following one, attributed to Tom Mitchell, to define a learning problem:

“Well-posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”

**Supervised and Unsupervised Learning**

Secondly, the course gives an intuition about the difference between supervised and unsupervised learning. I don’t know if there is a formal definition for this, but in my head, in supervised learning you have response variables (yes, I use statistical terms), while in unsupervised learning you don’t.

**Linear Regression with Multiple Variables**

The main focus of the lectures so far has been on linear regression with multiple variables. The way this is presented can sound quite strange if you come from a statistical background.

Basically, you have your training set $\{(x^{(i)}, y^{(i)})\}_{i=1}^{m}$ and you need to find the parameter vector $\theta$ that minimizes the cost function

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2, \quad (1)$$

where the “hypothesis” is given by

$$h_{\theta}(x) = \theta^T x = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n.$$

After you have found $\hat{\theta}$, the value of $\theta$ that minimizes $J(\theta)$, your best prediction for $y$ given $x$ would be $h_{\hat{\theta}}(x) = \hat{\theta}^T x$.
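As a minimal sketch of the cost function and hypothesis above (in Python/NumPy rather than the course’s Octave, with made-up toy data):

```python
import numpy as np

def hypothesis(X, theta):
    # h_theta(x) = theta^T x, computed for all training examples at once;
    # X is the m x (n+1) design matrix whose first column is all ones
    return X @ theta

def cost(X, y, theta):
    # J(theta) = 1/(2m) * sum_i (h_theta(x_i) - y_i)^2
    m = len(y)
    residuals = hypothesis(X, theta) - y
    return residuals @ residuals / (2 * m)

# Toy data generated from y = 1 + 2*x, so theta = [1, 2] gives zero cost
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(cost(X, y, np.array([1.0, 2.0])))  # 0.0
print(cost(X, y, np.array([0.0, 0.0])))
```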

It is interesting to note that there is not much discussion of confidence/credible intervals for parameter/prediction estimates. There is also not much discussion of the (model) assumptions behind all this.

**Finding $\hat{\theta}$**

*Analytical solution*

In the case of linear regression, there is a closed form solution to find $\hat{\theta}$,

$$\hat{\theta} = (X^T X)^{-1} X^T y, \quad (2)$$

where $X$ is the design matrix and $y$ is the response vector. Even though we have an analytical solution in this case, it becomes quite slow if the number of features $n$ is large, e.g. $n > 10^4$. This happens because in order to compute Eq. (2) we need to invert the $(n+1) \times (n+1)$ matrix $X^T X$, which costs approximately $O(n^3)$. For large cases like this, it would be worthwhile to use a numerical solution to the problem. The optimization algorithm introduced was Gradient Descent.
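A quick NumPy sketch of Eq. (2) with made-up toy data (in practice, solving the linear system is numerically safer than forming the inverse explicitly):

```python
import numpy as np

# Toy data generated from y = 1 + 2*x with no noise, so the fit is exact
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # design matrix with intercept column
y = np.array([1.0, 3.0, 5.0])

# Normal equation: theta = (X^T X)^{-1} X^T y
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta_hat)  # approximately [1. 2.]

# Numerically safer equivalent: solve the linear system instead of inverting
theta_solve = np.linalg.solve(X.T @ X, X.T @ y)
```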

*Gradient Descent*

Gradient descent is a first-order optimization algorithm. If we want to find the minimum of $J(\theta)$ with gradient descent, then we should repeat

$$\theta := \theta - \alpha \nabla J(\theta)$$

until convergence, where $\nabla J(\theta)$ is the gradient of $J(\theta)$ and $\alpha$ is the learning rate. With $J(\theta)$ defined as in Eq. (1) we have

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}, \quad j = 0, \dots, n.$$
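The update rule translates almost line-for-line into code. A minimal NumPy sketch, vectorized over all $m$ examples (toy data and default step size are assumptions for illustration):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    """Minimize J(theta) for linear regression by repeated gradient steps."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        gradient = X.T @ (X @ theta - y) / m  # (1/m) * sum of residual_i * x_j^(i)
        theta = theta - alpha * gradient      # simultaneous update of all theta_j
    return theta

# Toy data from y = 1 + 2*x; gradient descent should recover theta close to [1, 2]
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(gradient_descent(X, y))
```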

Some tips on the application of gradient descent were given:

- Feature scaling: Transform all your features so that they share a common range. One way is to standardize them. This makes it easier for the optimization algorithm to converge.
- Debugging: The best graph to check the convergence of the algorithm is a plot of $J(\theta)$ against the number of iterations. A decaying function is to be expected.
- Choice of $\alpha$: If $\alpha$ is too small it will lead to slow convergence, while if it is too big $J(\theta)$ may not decrease on every iteration, or may even diverge. Try different values of $\alpha$, as in $0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1$, to get a feeling for what works in your problem.

**Polynomial regression**

Polynomial regression was briefly mentioned, basically showing that you can use non-linear functions of your features as features themselves in order to capture non-linear trends while still using a linear regression model.
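As a sketch of the idea with made-up data: a quadratic trend can be captured by adding $x^2$ as an extra column of the design matrix, after which the fit is an ordinary linear least-squares problem (the model remains linear in $\theta$):

```python
import numpy as np

# Toy data following y = 1 + 2*x + 3*x^2 exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1 + 2 * x + 3 * x**2

# Expanded design matrix [1, x, x^2]; the model is still linear in theta
X = np.column_stack([np.ones_like(x), x, x**2])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # approximately [1, 2, 3]
```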

**Reference:**

– Andrew Ng’s Machine Learning course at Coursera

**Related posts:**

– Third week of Coursera’s Machine Learning (logistic regression)

– 4th and 5th week of Coursera’s Machine Learning (neural networks)