Udacity and Coursera – Week 1

I’ve just started taking four of the new batch of online classes being offered: Udacity’s CS101 and CS373, as well as Coursera’s Model Thinking and Software Engineering for Software as a Service, both of which have also just gotten under way. In my previous post, I compared the Coursera and Udacity platforms. In this post, I’ll address the classes themselves.


CS101 is an intro-level computer science course taught by David Evans, a Professor of Computer Science at the University of Virginia. The material requires no programming experience at all, so I am clearly not the intended audience. Despite that, I have decided to take the course, though solely for the contest at the class’s end. Because most of the material is so elementary, I’m skipping through most of the lectures and just completing the quiz and homework questions. It’s good practice for getting more comfortable with Python syntax. Some of the questions are even a little challenging, requiring genuine thought and understanding: for example, recognizing that ('')[0] will cause a run-time error in Python.
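As a quick illustration of that gotcha: indexing into an empty string fails at run time, so defensive code has to check first. Here is a minimal sketch (the helper name first_char is my own, not from the course):

```python
def first_char(s):
    """Return the first character of s, or None when s is empty."""
    if len(s) == 0:
        return None     # avoid the IndexError that ''[0] would raise
    return s[0]

# Indexing an empty string raises IndexError at run time.
try:
    ('')[0]
    raised = False
except IndexError:
    raised = True

print(first_char('hello'))  # h
print(first_char(''))       # None
print(raised)               # True
```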

As of now there isn’t much more to say, but I’ll post updates as the difficulty ramps up in future weeks.


Sebastian Thrun is teaching CS373, a course on how to program a robotic car. This is clearly a topic that excites Thrun, and not just because of his involvement with projects such as the Stanford Racing Team’s car, Stanley, and Google’s self-driving car. Thrun’s excitement for this topic comes across clearly in his lectures and makes the whole process interesting for the student as well.

The first unit was titled Basics of Probability: Car Localization with Particle Filters, so as you can imagine, the material covered was almost identical to material covered in Thrun and Peter Norvig’s Introduction to Artificial Intelligence, with one notable exception: the addition of actual programming exercises. And this made all the difference. I felt I had a pretty good understanding of particle filters and Bayesian probabilities after taking ai-class and ml-class in the fall. However, applying that knowledge in a programming exercise pushed my understanding further. I’m also happy that the exercises use Python, as I’ve been interested in learning it for quite a while. It’s good practice for getting comfortable with the syntax, though I keep pumping out code that looks like this:

for (row in p);

Notice the missing : and the added ; (to say nothing of the parentheses).
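For contrast, here is a small, idiomatic-Python sketch in the spirit of the unit’s localization exercises: a one-dimensional discrete Bayes update over a row of colored cells. The world layout and sensor probabilities here are values I made up for illustration, not the course’s:

```python
world = ['green', 'red', 'red', 'green', 'green']
p = [0.2] * len(world)        # uniform prior over the five cells
p_hit, p_miss = 0.6, 0.2      # assumed sensor model (made-up values)

def sense(p, measurement):
    """Bayes update: reweight each cell by how well it matches the reading."""
    q = []
    for i, prior in enumerate(p):    # a ':' here, and no ';'
        match = (world[i] == measurement)
        q.append(prior * (p_hit if match else p_miss))
    total = sum(q)
    return [x / total for x in q]    # normalize back to a distribution

p = sense(p, 'red')
print(p)   # probability mass concentrates on the two red cells
```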

I’m looking forward to the rest of this course. Sebastian Thrun, thank you for leaving Stanford and starting Udacity.


Scott E. Page, a professor at the University of Michigan, is teaching this class. So far, his explanations are clear and illuminating. It’s still early in the class, and there haven’t been any assignments yet, but you can already tell that he really cares about this subject and has put strong effort into building a well-organized course with a clear plan and direction. The videos are 8 to 15 minutes each, and each is organized around a single topic. There is also additional reading material listed in the syllabus that I have yet to look at; I’ll review that later. The course seems to be directed at humanities students. Page must be in high demand at Michigan.

So far there have been four units. In the introductory unit he presented some interesting evidence in favor of using models to help make decisions. For those not following along with the course, it boiled down to this: people who use models make better decisions than people who don’t, people who use many models do better than people who use only one, formal models do better than people, and many formal models together do better than a single formal model. Page then gave a number of examples of models used in the real world. My favorites were determining the authorship of the Federalist Papers and the much-cited Monty Hall Problem.
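Since the Monty Hall Problem comes up, a quick simulation makes the standard result easy to check. This is my own sketch, not course material:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of always staying vs. always switching."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)          # door hiding the car
        pick = random.randrange(3)         # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))   # about 1/3
print(monty_hall(switch=True))    # about 2/3
```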

The second unit discussed Segregation and Peer Effects. Again, Page presented a number of simple examples highlighting the surprising results that emerge from simple models. The one that stood out was Schelling’s segregation model, which shows how a city can become highly segregated even when residents have only a small preference for similar neighbors. While listening to this lecture, I thought about the ways the model’s simplifications could be distorting the results, and about ways we could expand the model to investigate alternate scenarios. I guess you could say I was Model Thinking.
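To give a feel for the idea, here is a one-dimensional toy version of Schelling’s model; the grid size, threshold, and swap rule are simplifications I chose for illustration, not the model as taught in the course:

```python
import random

random.seed(0)

def frac_same(grid, i):
    """Fraction of cell i's immediate neighbors sharing its type."""
    neighbors = [grid[j] for j in (i - 1, i + 1) if 0 <= j < len(grid)]
    return sum(n == grid[i] for n in neighbors) / len(neighbors)

def step(grid, threshold=0.5):
    """Each agent below its similarity threshold swaps with a random cell."""
    grid = grid[:]
    for i in range(len(grid)):
        if frac_same(grid, i) < threshold:
            j = random.randrange(len(grid))
            grid[i], grid[j] = grid[j], grid[i]
    return grid

grid = [random.choice('AB') for _ in range(40)]
initial = grid[:]
start = sum(frac_same(grid, i) for i in range(40)) / 40
for _ in range(50):
    grid = step(grid)
end = sum(frac_same(grid, i) for i in range(40)) / 40
print(''.join(grid))
print(round(start, 2), '->', round(end, 2))   # similarity typically rises
```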

The third unit is called Aggregation. The two standout examples from this? Conway’s Game of Life and Stephen Wolfram’s A New Kind of Science. Both are worth a look, or a review if you’re already familiar with them. The main theme of this section was how complex behavior can come out of an initial set of simple rules.
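The Game of Life is easy to play with directly; here is a minimal sketch of one generation step (my own implementation, not anything from the course), demonstrated on the "blinker" pattern, which oscillates between a horizontal and a vertical bar:

```python
from collections import Counter

def life_step(cells):
    """Advance Conway's Game of Life one generation.

    `cells` is a set of live (x, y) coordinates.
    """
    # Count, for every coordinate, how many live cells neighbor it.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick with exactly 3 neighbors,
    # or with 2 neighbors if it is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

blinker = {(0, 1), (1, 1), (2, 1)}                # horizontal bar
print(life_step(blinker))                         # {(1, 0), (1, 1), (1, 2)}
print(life_step(life_step(blinker)) == blinker)   # True
```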

Unit 4, on Decision Models, covered ground that anyone familiar with machine learning will have already seen in more depth, though it can always be helpful to hear a topic again from a different perspective. For example, here is a poignant quote from this section: “We don’t want the models to tell us what to do. We want the models to help us make better choices.”

Scott E. Page, so far so good. One thing I noticed, though: your talking speed fluctuates. It’s only an issue at 1.5x, when some of your speech becomes too fast to understand.


The Coursera SaaS course is being co-taught by UC Berkeley Professors Armando Fox and David Patterson. The first two sections have mostly been a 10,000-foot view of the very basics of software engineering processes (waterfall, agile, …) and SaaS. There was plenty of talk about TCP/IP and DNS, a lot of Web 2.0 buzzwords, and 76 PowerPoint slides. For someone who has worked at a number of companies, including ones practicing strict Scrum Agile, waterfall, and agile with a lowercase “a”, this has been pretty basic and dry material.

However, I am really looking forward to the assignments in this course, perhaps more so than any of the other courses. The excitement mostly comes from this sentence in the Overview, “Those [sic] submit homework 1 and receive a passing grade will receive a coupon good for 100 hours of small instances of EC2 for use on the remaining homework assignments plus a coupon to upgrade their free GitHub accounts to a Micro account (both good through the end of course).” Ruby on Rails, GitHub, Amazon Web Services EC2 and S3, … I’ve always wanted to be a hipsterhacker.

Seriously though, this class has the potential to be very interesting; for now I’m withholding judgment.


That concludes Week 1 of my online learning review. I plan on posting regular updates on the courses and my engagement with them. This whole online learning phenomenon has really taken off in ways we only dreamed of a few years ago. In closing, I want to thank Khan Academy, the first online learning site to really take off. Salman Khan, its founder, finally found the winning recipe, and he deserves much of the credit for kick-starting this phenomenon. Before Khan Academy, the state of the art in online learning was MIT’s OpenCourseWare.

Thank you Salman Khan.
