Tuesday, April 3, 2012

Inductive reasoning, and why you should care

Inductive reasoning, according to Wikipedia:

Inductive reasoning, also known as induction, is a kind of reasoning that constructs or evaluates propositions that are abstractions of observations of individual instances of members of the same class.

That probably conveys absolutely no value to you. The best way to understand inductive reasoning is to apply it.

In order to do that, we need some kind of problem which we can solve using a bit of induction.

I'm going to pick a math problem. Why, you say? Math is almost always the easiest way to understand reasoning, since math's primary device of "progress", so to speak, is reasoning.

So, here's the problem:

The plane is divided into regions by drawing a finite number of straight lines. Show that it is possible to color each of these regions red or green in such a way that no two adjacent regions have the same color.

If at first this doesn't strike you as a math problem, you probably haven't met this chap called graph theory, or maybe haven't been introduced to bipartite graphs.

So, how do we go about solving this?

Before reading on, try out some cases; one line, two lines, ten lines, and so on. Try and figure out some patterns.

Now, I'll show you the solution.

First, consider one line. We can color the regions on opposite sides of the line opposite colors (i.e. red and green), and we're done.

So, what happens when we add one line? Think about this for a moment. Can we try the same trick?

Yes! What we can do is take one side of the new line we just put in and swap all the colors on that side (so red goes to green, green goes to red), and we get a complete solution! So, every time we add a line, we can simply follow this algorithm and get the correct coloring. Of course, this isn't a fully rigorous proof, but it shows what we're after.
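If you like code more than proofs, here's the same idea as a tiny Python sketch (entirely my own illustration, not part of the proof): identify each region by which side of every line it lies on, and color by parity. Adding a line flips the color of everything on one side of it, which is exactly the swap trick.

```python
# Parity coloring sketch: a region's color is the parity of how many
# lines it sits on the "positive" side of. Flipping one side of a new
# line is the same as adding one to that count.
def side(line, point):
    # line is (a, b, c) representing ax + by + c = 0; returns 0 or 1
    a, b, c = line
    x, y = point
    return 1 if a * x + b * y + c > 0 else 0

def color(lines, point):
    parity = sum(side(l, point) for l in lines) % 2
    return "red" if parity == 0 else "green"

lines = [(1, 0, 0), (0, 1, 0)]   # the two coordinate axes
print(color(lines, (1, 1)))      # red
print(color(lines, (-1, 1)))     # green -- the region across one line
```

Two adjacent regions differ on exactly one line, so their parities (and colors) always differ.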

Inductive reasoning is taking a method you used to solve a problem for a set constraint (we knew how to do it for one line), and then extending that method to solve the problem without the constraint.

If you write code, you probably do this all the time (and just didn't know what it was called). If you've got a function that's misbehaving, you pass in a couple of values that you think form an edge case, and when you notice that it's going wrong for *one* of these values, you change the function definition so that the error doesn't happen.

But, you should use this kind of reasoning much more often. Whenever you're stuck on a problem, try solving it under some constraint first, then extend the method past the constraint. Of course, none of this usually happens consciously, but, if you try doing it consciously a couple of times, it "just happens" eventually.




 

The Travelling salesman problem


The travelling salesman problem is possibly the most famous and probably the easiest to understand of the so-called "NP-hard" problems computer scientists (and mathematicians) have found.

The problem statement is quite simple.

Say you're a salesman, and you have $n$ cities you've got to cover by the end of the day. It would be best if you could minimize the total distance you have to travel between all of the cities, and you know the coordinates of all the cities in reference to some point known as the *origin*. Given these coordinates, define an algorithm that gives you the shortest possible path through the cities.

Of course, this doesn't seem like much of a problem at all when you first look at it (like most other difficult problems); you could just try all the routes between the cities, find the route with the minimum distance covered, and you're done!
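Here's what that brute force looks like as a Python sketch (the cities are made-up coordinates, purely for illustration):

```python
# Try every ordering of the cities and keep the shortest path.
import itertools
import math

def path_length(route):
    return sum(math.hypot(p[0] - q[0], p[1] - q[1])
               for p, q in zip(route, route[1:]))

def shortest_route(cities):
    return min(itertools.permutations(cities), key=path_length)

cities = [(0, 0), (1, 5), (4, 2), (3, 3), (6, 1)]
best = shortest_route(cities)
print(best, path_length(best))   # checks all 5! = 120 orderings
```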

Let's see what that entails in terms of performance/scaling. With $n=5$ (i.e. five cities to visit), we have a total of $5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120$ ways we can trace out the route between the cities, and we have to check all of these. That doesn't seem too bad for a computer.

How about for $n=10$? Well, $10! = 3,628,800$, so it's quite a big jump from $120$, but still manageable.

Let's go up by another five; $n=15$. That gives: $15! = 1,307,674,368,000$. Holy cow. That's a lot. But, how did it jump that quickly?!

To answer that question, we'll need a bit of math (if you understand big Oh notation, you can skip the next couple of paragraphs).

Consider this situation. You have one car that's accelerating from a low speed, whereas another car is going at a higher speed, but, the second car's speed will not change. What do you think will happen if we let the cars go on forever? Of course, the car that's accelerating will overtake the car that's at a constant speed, right?

That's what scaling algorithms is all about. You might have one algorithm that performs better at lower "sizes" of the input, but, a second algorithm may overtake it if it can "scale better" (i.e. accelerate). So, when we say that an algorithm is $O(n)$, what we mean is that the algorithm performs *linearly*, i.e. for each unit increase in $n$, we get a certain unit increase in the time spent (or whatever else you're measuring). Similarly, $O(n^2)$ means that the time taken grows like the *square* of $n$.
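To see those growth rates concretely, here's a toy step-counting sketch (not a real benchmark, just bookkeeping):

```python
# Count the "steps" a linear pass and a pairwise pass take over n items.
def linear_steps(n):
    return sum(1 for _ in range(n))                     # O(n)

def quadratic_steps(n):
    return sum(1 for _ in range(n) for _ in range(n))   # O(n^2)

for n in (10, 100, 1000):
    print(n, linear_steps(n), quadratic_steps(n))
# 10 10 100
# 100 100 10000
# 1000 1000 1000000
```

Multiply $n$ by 10 and the linear count goes up by 10, but the quadratic count goes up by 100; that's the "acceleration".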

But, all of this still seems on some shaky ground, so, let's break out the equations (warning: if you don't know how limits work, skip this). This won't be a *completely* rigorous argument (which would require more math), but, it will be much better than the argument we have in place.

How can we say that a linear time algorithm (i.e. $O(n)$) will scale better than $O(n^2)$? Well, what we're informally saying is that the $O(n^2)$ grows much quicker (i.e. worse) than $O(n)$ as $n \to +\infty$. If you remember that bit about relative rates of growth from first semester calculus; $\lim_{n \to +\infty} \frac{n^2}{n} = \lim_{n \to +\infty} n = +\infty$

So, we've proved that $O(n^2)$ grows faster than $O(n)$, and therefore performs worse than $O(n)$.

What does that mean in terms of the traveling salesman problem? The traveling salesman problem brute force solution is in fact $O(n!)$ (since the number of routes goes up like that).

Why is this so bad?

Consider a polynomial time solution (i.e. $O(a_k n^k + a_{k-1} n^{k-1} + \cdots + a_0)$). Using a bit of calculus/limits, we can say, for certain, that factorial algorithms are much worse than polynomial time solutions as $n \to \infty$, since $\lim_{n \to +\infty} \frac{n^k}{n!} = 0$ for any fixed $k$.

That's what makes the travelling salesman problem interesting: it has no known polynomial time solution, and it's something called an NP-hard problem. What NP-hard means is actually very complicated and heavily theoretical, but, it constitutes a very important part of the P vs NP problem, which, informally, asks (quoted from Wikipedia): "whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer" (s/quickly/in polynomial time/g).

So, that's what the travelling salesman problem is, and why people care.

Saturday, March 31, 2012

The new Blogger update ... sucks.

I've been using Blogger to host this site, and, frankly, it's probably been one of my worst experiences with Google software, ever.

First of all, let me cover the good things about it (i.e. why I chose it in the first place). It's very quick to get started with a blog, it's easy to add AdSense if you want it, and that's about it.

What's wrong with it?

First of all, the widgets model is broken beyond all belief. All of the widgets that are provided are pretty much useless, and most things like syntax highlighting and LaTeX involve tacking on bits of Javascript onto the page, which Blogger doesn't mix with all too well. 

Secondly, the editor sucks. We're in 2012 here people, and it STILL escapes out when I hit tab. Why is this so difficult? Isn't Blogger now developed by the same company that makes Google Docs? 

Then, once Google+ came along, all of the other "social" buttons were pushed off to the side and made quite tiny, which is understandable from Google's point of view, but, from my point of view, it doesn't make much sense.

The themes are also far from amazing (http://www.bloggerthemes.net/ does try to remedy this), and the selection of fonts is pretty much non-existent.

And, most of all, Blogger, in general, seems to try to get in your way every single time you want to just "write a short entry", or "get this article on the internet". 

So, what did the update fix/add?

It added some analytics directly on the front page; I was surprised to find out this blog had over 50,000 page views in the past 2 weeks. 

Also, it cleaned up the editor a *tiny* bit (it still escapes out on the tab), and the main change was just making the editing textarea/contenteditable larger.

The design is also much better and clearer.

But, they still completely ignored the actual problems that plague Blogger; so, I just can't keep waiting any longer. I'm going to try and pick from the enormous assortment of options (Posterous had caught my eye, but, it's been acquired, so, I don't want to risk anything on it).




Sunday, March 18, 2012

Experiences in Go-ing

I've been messing around with Go for quite some time now and I wanted to share my experiences.

When I first looked at Go, I put it aside as "just another language", and moved on with whatever I was doing.

I mean, there are languages popping up every day that are worth nothing whatsoever because the ideas they bring to the table are too few to take the risk of using a language that isn't popular (less documentation, less support, less everything).

That summer, I began writing some more low-level C/C++ code to implement a Bayesian classifier stock recommender thing. It was a complete disaster.

Just some basic string parsing was a nightmare; C gave no utilities whatsoever, and C++ was good, until I hit an error and it gave me a 20-line-long error report, and I had to use all kinds of specialized templates for the simplest of things!

And, the worst problem I faced was sockets. Non-blocking, multiplexed and cross-platform socket support with C is basically non-existent (unless I wanted to use libev or libevent, which have documentation scattered across the internet in small chunks). With C++, there are many choices, such as Boost.Asio (basically no documentation), ACE and ICE (this one I'm genuinely excited about, but, I hate the specialized object format crap I have to deal with).

And, of course, there's no language support for any of this, so if I ever wanted to distribute my code, the client would have to have the libraries.

I couldn't sacrifice performance (lots of number crunching was involved, and costs were tight), so I couldn't pick Python.

Then, I came back to Go, and looked at it again from a different perspective, and hoped it would offer me something that could rid me of all of this.

I never did in fact write the Bayesian classifier thing (the idea wasn't much good anyway), but, the project introduced me to Go, which, I must say, is an amazing language.

The first thing I immediately noticed is that they got rid of the parentheses in "if" and "for" statements. Coming from Python, I really like this.

And, here's a language that's FINALLY SUPPORTING UNICODE FROM THE START!

Closures are supported and functions can be passed around like salt shakers; it's wonderful!

The whole language seems well thought out and cohesive; there aren't really any points that felt like they didn't match the rest of the system. It's very much like a suit without any stitch marks on it.

As for the standard library, it's nothing short of awesome. It includes a lot of things that I've frequently wanted in C/C++ as part of the runtime, such as json, xml, http, a cryptography package, even websocket support built right in.

And, when they say it feels dynamic and interpreted, they actually mean it; the type system steps out of your way most of the time.

The only thing I find lacking is that there are no books on the language as of yet, but, I expect that will be remedied once the language is "marketed" a bit more (Google hasn't put a ton of weight behind it ... yet).

It also has support for RPC which makes writing distributed computations really easy and quick.

Unless I have to write some really low-level code, I refrain from using C/C++ these days and instead reach for Go; about the same speed with half the development time.

I really encourage you to go check out Go, and just play around with it; you might start using it all the time.

Monday, March 12, 2012

To all the people who say programming competitions are useless

There's been this general vibe on HN and r/programming that programming competitions (that involve problems one must solve) on the whole have little to do with programming in general.

I beg to differ.

99% of us can crunch out Javascript and some backend language pretty quickly; some ajax interaction, maybe some form validation and some Twitter and Facebook APIs. And, a lot of us can quickly learn new things when we need to. Need to learn Scala for next project? No problem. Client wants to use the Face.com API? Alright, let's do it.

But, that's not the only stuff that matters.

Your skills are *really* tested when you hit a roadblock that isn't covered by the abstractions you're working on.

For example, the database inserts aren't happening fast enough, and the company doesn't want to buy any more servers to fix the problem, so you've got to optimize it.

Programming competitions don't test whether or not you can go about updating portions of code, they test whether you are able to write code to solve a problem you've likely never heard of under a time constraint.

And, that's what companies would like in programmers.

Math-based problems are just one way of approaching timed problem solving, and they work quite well, since many of us are well versed in at least basic math, and are able to get our hands around the problem so that we can begin to think about implementing it.

So, programming competitions *do* have to do with programming and the ability to solve problems, which is what development is all about.

Friday, February 24, 2012

BIOS primer

Most of us know what a BIOS looks like, and have some bits and pieces of an idea about what it's supposed to be doing. This needs to be remedied, so read the rest of this article!

What happens when you hit the power button on a computer?

The machine doesn't jump straight into loading your operating system; first, the BIOS code runs, because that's what's able to hoist up the CPU, hard drive, display, etc.

The BIOS is contained on an external chip (that's why some of you see your motherboard's manufacturer's name when you see the BIOS).

A BIOS not only lets you set the boot order (which is what most of us have used it for), it has a couple of other jobs.

First of all, it configures your hardware.

Some hardware depends on other hardware, has specific settings, etc.
All of this is handled by the BIOS, so that everything is ready for the bootloader.

Also, all (well, nearly all) computers have a system clock, which doesn't actually tell you the "time" directly (though it could); it counts ticks from a certain date, such as the epoch. The BIOS sets this up.

It also determines which devices can be used as bootable, because there are certain types of storage devices that a BIOS cannot load off of (for example, volatile memory).

Then comes the main job of the BIOS. It calls a bit of code that resides in the first 512 bytes of the first sector of the selected storage device.

In the early days of computers, 512 bytes was enough code to load the operating system, and things worked wonderfully.

Of course, this is no longer the case considering that the Linux kernel is almost 15 million lines of code.


So, these 512 bytes (known as the MBR) usually call *another* piece of code which then loads your operating system. Or, it could hold a list of sectors on your hard drive from which another piece of code is then called.
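If you have a disk image lying around, you can peek at an MBR yourself; here's a quick Python sketch (the filename is just an example):

```python
# Read the first 512 bytes (the MBR) of a disk image. On a bootable
# disk, the last two bytes are the classic 0x55 0xAA boot signature.
with open("disk.img", "rb") as f:   # path is a placeholder
    mbr = f.read(512)

print(len(mbr))             # 512
print(mbr[510:512].hex())   # '55aa' if the image is bootable
```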


The BIOS also provides a small API for the MBR to use to write to the screen, make some interrupts, etc.

It's pretty cool stuff...



Sunday, February 19, 2012

C Programmers: Don't write macros

Take pretty much any large C project today.

If you look through a couple of files, you'll notice some things are written in CAPS LOCK AS IF THEY'RE YELLING AT YOU.

These are macros.

They're not as powerful as Lisp macros; they just replace themselves with a bit of code.

C Macros are actually quite nice if they're used in moderation, because they allow you to type less.

But when you have sixteen macros in one line, you really need to stop.

Please.

So, this is my plea to all of you C programmers: please stop writing more macros, because you're going to regret them when you have to hire a new guy and he spends 19 weeks trying to figure out what all the YELLING ACTUALLY MEANS.




Monday, February 13, 2012

My stance on JVM Languages

JVM languages have been popping up all the time; there's Scala, Clojure, Groovy, etc.

These are all excellent languages, without a doubt they are. I use them all the time, and I have come to enjoy them a lot.

But, sometimes, I wonder about the very long-term (well, at least in the software world) consequences of using these languages.

Java, as many of us consider it these days, is an "uncool" language. It is too verbose, too complicated and has too much over-engineered crap that no one uses.

It lacks the simplicity of something like C (notice I did not say C++ :P), where the manual for the ENTIRE LANGUAGE was a hundred something pages long.

But, there is a ton of existing code in Java that no one wants to let go of, because people spent so many years developing that code, and it would take many more years to replicate it in another language.

So, people created JVM languages which run on the Java Virtual Machine, and they can interface pretty easily with Java. Then, you can write all your new code in a fast, hip language like Scala, but, you can still keep all of your legacy stuff around, and you don't have to rewrite a thing!

This works wonderfully, except...

Java, in my opinion, is slowly turning into another COBOL. Heavily used in large, high-capital industries where a change is made every few centuries (roughly speaking), but not used much elsewhere because it's just too obsolete.

So, if one creates languages on top of such a language, you're still hooking into the same old architecture. That's like writing a language that compiles to a COBOL virtual machine, people would laugh at you.

Does this change anything for us, right now?

Absolutely not.

The term we're talking about (for Java to turn into COBOL) is just so long that most companies won't even live to face the day.

I'm still going to keep using JVM languages, because they're just plain awesome, but, I just wanted to put my thoughts out there.

Sunday, February 12, 2012

Graph databases explained quickly

Graph databases seem to be pretty important these days, with a social network popping up every time someone says "Facebook", so, I decided I'd do a small article on them.

Take a traditional, SQL database.

What's it good at?

It's good at specifying tables in which specific types of data lie.

Okay, what's it bad at?

Well, a lot of things, one of which being that having one piece of data "linked" to others is a complete pain.

What does that mean?

If you're a person and you have 500 friends, you need to be linked to 500 other people, all of whom, in turn, are linked to you AND a bunch of other people.

Now, let's consider a graph database.

In a graph database, there are things called nodes, properties and edges.


Nodes are close to objects/structs for the OOP people, in that they are entities that can represent people, accounts, etc.

Properties are information that nodes have; for example, if one of the nodes was "Person", its properties might be "Name", "Age" and "Address".

Edges are the things that connect the nodes together, and they are really the most important part, since in a graph database, how the nodes are related matters a lot more than what the nodes actually contain.
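Here's a toy picture of those three pieces in Python (this isn't any particular graph database's API, just dictionaries standing in for the idea):

```python
# Nodes carry properties; edges relate node ids with a relationship type.
nodes = {
    1: {"Name": "Alice", "Age": 30, "Address": "12 Foo St"},
    2: {"Name": "Bob", "Age": 32, "Address": "34 Bar Ave"},
}
edges = [
    (1, "FRIEND_OF", 2),
    (2, "FRIEND_OF", 1),
]

# Finding Alice's friends is a walk along edges, not a join over tables.
friends = [nodes[dst]["Name"] for src, rel, dst in edges
           if src == 1 and rel == "FRIEND_OF"]
print(friends)   # ['Bob']
```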

Hopefully that made sense, if not, drop a comment below!

Follow if you liked it :)



Saturday, February 4, 2012

What's referential transparency?

You've probably heard about it before from all those crazy functional programming people, but, you don't *really* know what it means... I'm here to fix that!

Consider a procedure/method written in an imperative language.

In such a procedure, the program can do anything it wants! 


Which means it could multiply two numbers, crash your car, and then save those numbers to a database.

The things that it does that affect the outside world are called "side effects".

In a purely functional programming language, side effects don't exist.

But, in "almost" purely functional programming languages (in which you can actually do something practical stuff like writing programs that do things), the things that have "side effects" are confined to a couple of functions that interact with the outside, but, the rest of the functions (called pure functions) are entirely separated from the outside world, which means that they can be executed anywhere can we can expect the same results.

Consider math for a second.

Mathematics is a purely functional language, because there's no way it can affect the "outside world" (at least, mainstream math can't).

Say you define a function called $f$:

$f(x) = x^2$

Now, what do you get when you compute $f(2)$?

$f(2) = 4$

Okay, so, what's so important about that?

That simple statement means that no matter what state the rest of the world is in, if you give the function $f$ a value of $2$, you can bet the farm that it will return $4$.

The same holds true for pure functions in functional programming languages!

And, that's what referential transparency means.

Basically, we're saying that if we plug in some value $m$ into some function $f$, we can always expect to get the same value back ($f(m)$), regardless of the state of the rest of the stuff we're dealing with.

Why's that useful?

First of all, that's a heck of an improvement for compilers/interpreters.

If you're finding the value of $f(2)$ 40 million times, the language will (usually) only compute it once (this also has to do with how parse trees are rewritten when evaluating functional programming languages, but the basic idea is here).
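You can fake this behavior in Python with memoization, which is only safe because the function is pure (a sketch, not how any particular compiler actually does it):

```python
# Cache results per input; a pure function makes this always correct.
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    print("computing...")   # appears only once per distinct input
    return x * x

print(f(2))   # computing... then 4
print(f(2))   # 4, straight from the cache
```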

This also lifts the requirement that code run sequentially; it can be run in whatever order is fastest.

Referential transparency makes code a lot easier to reason about and debug, since you KNOW that something will return a certain value when it receives a certain set of values as input.

There you go, you should hopefully understand referential transparency.


Friday, February 3, 2012

Functional programming explained

I've noticed a lot of confusion over why functional programming languages seem to be what the cool kids use these days, and why they can actually make a difference in your productivity as a developer, and I hope I can clear that up with this.

Consider a standard, run-of-the-mill imperative language, like Java, or C.

Your procedures or methods typically look like this:
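(The original snippet was an image that has since gone missing; below is a minimal reconstruction in Python of the method described next. The `Database` class is a stand-in for some heavyweight dependency, not a real library.)

```python
# addNumbers: adds two numbers, but sneaks in a database write.
def addNumbers(a, b):
    result = a + b
    Database.save(result)   # side effect buried in the middle
    return result
```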

Basically, you're taking two numbers as arguments, adding them, saving the result to a database somewhere in the middle, and returning the sum.

Not too shabby, right?

Consider this situation. For some reason (your application isn't working), you need to check if this particular method is working.

What's the problem? Well, you can't test this method separately, because it needs access to the Database class, which needs access to about 6 billion other things!

The ability to test things separately from others is called "decoupling".

Another issue with imperative languages is that they can modify all kinds of state in one function!

So, in effect, when you call the function addNumbers, it could delete your database, empty your bank account and draw a circle on the screen. Why is that a problem, you ask? Well, that's a pain to debug, because you have no idea what types of functions can modify what types of state!

Another issue is global/static variables, which cause no end of problems for Java programmers who find that changing static fields in certain objects affects all the others in adverse ways, and this takes a ton of work to weed out.


How does functional programming solve these problems?

The first thing to understand about functional programming is that in a completely pure functional programming language, there is no way you can modify external state.

That means you can't print to the screen (that's changing the state of the screen), access the web (changing socket state), or do anything particularly useful.


But, this wouldn't be terribly useful, since you wouldn't be able to get any output shown (people get around this by writing implementations that print every calculation to the screen, but this just creates more confusion).

However, that's not to say that they aren't used. An example of an ancient and hugely successful purely functional programming language is mathematics (there's no "print" command in math).

So, languages like Haskell use this special idea (borrowed from mathematics, see my abstract algebra posts) called a Monad to "mark" special functions that do modify state so that they can be separated cleanly from the rest of the program.

This nearly kills off the decoupling problem!

Because there are very few non-pure functions in a typical functional piece of code, everything else IS decoupled from the environment!

This is heaven when you need something debugged!

And, this cleans up what functions can and can't do; every pure function MUST return a value, so you can expect something in return, not just a void (because returning nothing would mean it's only changing state, and is therefore not a pure function). If you try to write to the database from a pure function, you can't, which keeps your code squeaky clean.

Usually, functional languages come with static typing plus type inference, which means that you don't have to write out the types when you're declaring functions, but they're still there, so when you try to pass a String to a function that wants an Integer, the compiler will tell you.

All of these restrictions make it much more difficult to get your code to compile, but, once it does, it sure as heck won't crash (it might not do what you want it to, but that's what unit tests are for).

In functional languages, variables aren't really variables in that they can't be changed.

You're probably saying "WHAT?! How am I supposed to implement a counter then?!", and that's solved entirely by recursion.
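For example, here's a counter with no mutable variable at all (sketched in Python; a functional language makes this the default style):

```python
# The "state" (the running count) is threaded through recursive calls
# instead of being stored in a variable that changes.
def count_items(items, acc=0):
    if not items:
        return acc
    return count_items(items[1:], acc + 1)

print(count_items(["a", "b", "c"]))   # 3
```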


So, that's why functional languages are awesome, and you should go check out Haskell!



Thursday, February 2, 2012

Why is server configuration so annoying?

A lot of programmers these days are switching over to dynamic languages like Python and Ruby, mostly for their ease of development.

Sure, it entails that you must make sure you aren't passing the wrong types to the wrong functions, or you won't find out until runtime, but, if you're willing to take that risk, dynamic typing can really speed up your work process.

And, just like languages, nearly every other aspect of computer science/engineering has evolved toward the simpler, more elegant approach, and, in this shift, many have profited.

For example, Twilio's business model is based entirely on the fact that building phone systems in a modern programming language is just much too difficult for an average programmer, and, by simplifying the process by which developers can take advantage of phones, they've made quite a company. 

And, people work every day to simplify things that were previously immensely complicated into black boxes in which the details are abstracted away.

As another example, take the humble Python list. It seems like a pretty simple idea; you can push and pop things, and you can delete things from the middle, you can sort and search them.

But, implementing a data structure like that in C is a formidable task, especially with all the optimization work that the Python folk have put into it. 

And, things like this have led to an explosion in Python's popularity.

But, I see one area where this type of stuff is immensely lacking, and that's server configuration. 

First of all, why on Earth does Apache still use that XML-like configuration format?! Is there literally no other format they can use?

Then, SSL shouldn't be so difficult to set up. It should take, at most, one shell script, or ANYTHING that'll generate a basic SSL virtualhost.

There need to be sensible defaults for log-files per vhost and so on.

There need to be macros that we can use to have chunks of configuration with variables plugged in.

And, in general, a lot of things need to be abstracted away into more usable and easy components.


That's all I've got to say, but, do tell me what you think in the comment section below.

Please follow if you liked this (top right). 

Wednesday, February 1, 2012

The IRC mentality

In recent years, more hacker forums have sprouted up, moving away from the traditional "somebody asks, a bunch of people reply, and then there are replies to that", to some newer ideas.

For example, there's Stackoverflow.

StackOverflow is an excellent idea.

By introducing gamification into the idea of answering other people's questions for free, they've changed the entire scene of programmer forums. This causes people to think twice (well, for the most part) before posting useless questions, and the same for question answerers.

This works with amazing efficiency; ask a fairly easy-to-understand question on StackOverflow (SO), and you'll get an answer within a couple of minutes (unless it's something incredibly unpopular like running ARM assembly in an MBR), and the answers are immediately filtered by the upvote-downvote system.

As an added bonus, the interface is clean, succinct, and clear.

Frankly, it's just all-out awesome.


Coming from places like Daniweb (ads above the fold; aarrgh), it's a welcome breath of fresh air.

But, that's not where I started.

I started on IRC.

IRC is facing popularity issues these days, with all the new solutions for collaborating (for example, Stripe uses Campfire by 37signals).

I'm not going to talk about how IRC is outdated, and, I'm not going to tell you that it's the only "real" way to chat with people, because that's your decision.

But, I will tell you about the community around IRC.

When you're a beginning programmer, and you're starting to walk around the intricacies of a language, and you're faced with a problem you can't solve with, say, 10 minutes of work, what do you do?

You try to ask for some help.

Consider two scenarios, one on a website like StackOverflow, and one on IRC.

You post it on SO, and you get a response in five or six minutes that tells you how to solve the problem and what you're doing wrong; you get your code working, you hope that the person who gave the answer has a nice dinner, and you move on.

Second, you go onto IRC, and you ask the question. Someone comes to your assistance. Then, the first thing they usually tell you is to check the "docs", and if you don't know what this is, they'll point you towards where it is. You search around for 10 minutes, find something you think might be right, and the person helps you get it working. But, he just gives you a few hints like "see, you're trying to pass in a string right there, but, $num is supposed to be a number", and at the end of an hour, you've finally solved your problem, and you feel amazing.

What would you pick?

I would  pick the IRC experience.

It teaches you HOW to go about solving problems in general, not just the solution to the particular problem you're facing right at that moment (well, that too), and you get a much deeper understanding of the problem, and you'd be able to apply the same knowledge of "check docs, figure out function, pass in right types" to hundreds of other problems you'll face pretty soon.

Now, this is for a new programmer.

If you're an experienced programmer, and you've been trying to solve something for 4 hours without success and think you're missing one small quote somewhere, or you're getting a NULL pointer, you'd be better off on SO; but, for new programmers, IRC is important.

That's what I think, do tell me what you think in the comments section below!

Follow if you like it (top right).

Monday, January 30, 2012

The education system is broken, and here's how to fix it

As most of you know, I'm a high school student (freshman).

This year I complete my 10th year of being a full-time student (including kindergarten), and, I must say, I am sick and tired of the way people are "educated".

Take your typical mathematics class (my favorite subject inside and out of school) in the United States.

The teacher starts by checking the night before's homework (most of the time just checking whether people have done it) and asks people if they're having any difficulties with it. If so, then he/she does that problem on the board and explains it. Then, he/she introduces a topic like "rational functions", "imaginary numbers" or "the Pythagorean theorem" from the "textbook", which has a few examples in it.

The lesson usually begins with the formula for that chapter or section or unit or whatever the heck it's called, and then has the students work a simple exercise in which they plug a few numbers into this magical formula, which then spits out the answers.

Then, the teacher picks a more "difficult" problem in which the same formula is used, but it's a "real world application", in which students find things like the distance between two campsites or figure out how long it will take Bob to run around the track if he's going at so-and-so miles per hour.

And, the homework is handed out, which consists of exactly the same problems, just redressed with different numbers, and the cycle begins once again the next day.

So, what do people learn at the end of the chapter?

Absolutely Nothing.

Here's the problem.

First of all, the students are never given any idea of how the formula actually came into existence.

Basically, the current system dictates that it doesn't really matter where it came from, for all your concerns, someone just thought about it one fine day and wrote it down, you have a test coming up next week, so you'd better stop getting distracted and get the formula memorized or you'll fail.

So, this causes students to go through the "textbook" skipping everything but the formulas, and then memorizing those. Then, when the test comes along, those who had time to memorize their formulas do great, and those that had something else going on get low grades.

Then, as soon as they're done with the test, they put in all of their efforts into memorizing the next set of formulas and have nothing left from the last set that they memorized except "I love *whatever the topic is*, I got a 96% on that test".

And, the "textbooks" have "real world applications", which involve doing things that people would never even DREAM about doing (there are an incredible amount of problems about outdoor activities), because they're just plain stupid (do you measure the distance between the department stores in your town with the Law of Cosines?). 

So, this causes students to think "when am I going to use any of this stuff anyway?", which is something I would think about very often if I hadn't taken the time to read a bit more about math.

These facts need to change. 

First of all, you might have noticed that I keep saying textbooks in quotes, and there's a reason for that. 

The textbooks suck.

Pick up any standard, run-of-the-mill textbook about "Algebra II" or "Precalculus", and you'll see a very similar picture.

The chapter begins, promising incredibly lackluster "real world applications". Then, it goes into the first section, in which it tells you the formula in a conveniently highlighted box, which basically tells students not to read the rest of the passage, since it's only there to make the book fatter anyway.

Then, there are pictures of people doing sports and things considered "fun", which are related to the subject matter extremely lightly, but the publishers and authors assume that if you put in some pictures about sports, the students will "learn better" (a.k.a. memorize formulas in a shorter amount of time).

Then, there are a few examples in which the formula is used (again, with no explanation of where it came from, or why it is needed), and the section ends with a horribly stale application before commencing onto the exercises.

There are 110+ exercises in each section, about five of which are actually worth doing, since all the others are just slight deviations from the examples. 

At the end of it all, you haven't solved a single problem by thinking it out yourself, all you've done is plug in some numbers.

The biggest thing that these textbooks and curriculum lack are real problems. 

Right now, almost every high-paying job in the world (with a good success rate; e.g. you cannot count football players, since less than 1% of them make high pay) deals with solving problems.

Developers/Engineers?
We solve general problems like "An easy way to share pictures is needed; we have to build it", but we also solve problems like "The search results are taking way too long; is there some kind of way we can speed them up?"

Obviously, these are difficult problems, and there's no general "formula" to plug things into that'll spit out the right solution. 

We have to invent things as we go along.

Doctors? 
When a patient comes to you, you don't have a set method of identifying "he has so-so, and he needs this to fix him up". 

First of all, he gives you about 98 different things that are happening to him, and from there, you need to understand which ones are important and then find out what's wrong with him (assuming that the symptoms you deemed important really are important).

You have to take a small amount of information given to you, and synthesize a solution.

Lawyers?

You have a set number of rules you have to follow; given some conditions and some more information, you need to derive a conclusion.

All of these things are what the current curricula lack.

There are no difficult problems that one must think about in order to find the solution; there are just exercises where, as long as you don't make any mistakes, you'll be perfectly fine.

This makes students entirely neglect the point of mathematics!

The point of math isn't to find out the probability of Jimmy wearing red socks on Tuesday, or to memorize the Rational Root Theorem word for word, because you probably will never use those things in your entire life; the point of math is to show how clear-headed thinking and problem-solving are executed.

Given a difficult problem, one must invent, synthesize and derive in order to find a solution, which may involve failed approaches, but, the result is much more lasting.

And, this cut-and-dried approach to math is useless, because as soon as you take advanced math courses in college, you'll realize that the "proof" sections in the book actually mattered!

Okay, so, I've stated all the problems, now, how about a solution?

First and foremost, get rid of the "Algebra II" and "Geometry" subdivision junk that's going on; it's entirely useless.

Math wasn't invented like that, and therefore should not be taught that way. 

Mathematics isn't a bunch of blocks of knowledge that you use parts of to do exercises; it's a whole continuum, where a problem, if not solvable by the methods in one field, can be moved to another.

Second, the textbooks are horrible and need to be rewritten.

To do that, we begin by placing a limit on the size of the textbook at 300 pages or so (sounds a bit Legalist, no?)

Then, we take out the horribly expensive and useless color pages (except for 4th grade and below; I really liked the colors then, because the textbook didn't really mean much), and replace them with black and white.

Since it's all black and white, we might as well throw out the pictures of people surfing as well, since people draw mustaches on those anyway.

Then, we rid ourselves of the idea of sections, since they get in the way anyway and constrict our overall "view" of mathematics too much.

Now, each chapter deals with its topic in a very conversational way, where the motivation of each step (all proofs included) is given, and how one step leads to the next is made very clear.

There is one textbook currently in use that I would like to point out does this incredibly well: Linear Algebra and Its Applications by Gilbert Strang.
Matrix multiplication isn't just thrown out as a formula; it's developed conversationally from the idea of linear systems (certain higher-level textbooks do much better in this regard).

And, the topics within a chapter are connected; one cannot just jump around, everything is continuous.

However, chapters can jump around math as much as one would like, from Number Theory to Geometry and then to Combinatorics.

All the exercises need to be replaced with a few well-selected sets of problems which not only extend the material, but provide new insight as to how to take a simple idea and use it on a complex problem (Pigeonhole Principle-esque stuff).

Then, the textbook side of things would be quite solid.

Teachers need to be retrained, and paid a heck of a lot more than right now (think six figures), to make it a much more competitive field. Teaching needs to be based around problems: starting lessons with problems, and then allowing the students to have a go at the problem, giving hints in between to lead them closer and closer to the solution.

If we are able to make a large portion of these changes, we would be far better off than right now in terms of what students can take away from mathematics classes, regardless of what they study in the future.

That's my idea of how this would work, and I would love to hear yours, so, please drop it in the comment section below. 

If you liked this post, please tell me about it, so I'm encouraged to write more, and please follow (there's a button at the top right). 

Questions, comments, etc. all welcome!




Monday, January 23, 2012

Rotation Matrices for developers

Hi!

So, why the heck would you care about Rotation matrices?

Well, a rotation matrix is an "easy" (once you're used to it) way to specify how to rotate a certain plane with reference to the origin, so, if you're doing any kind of graphics programming (WebGL is all the rage these days), you should know this stuff pretty well.

First of all, what's a matrix?

A matrix, despite how complicated it sounds, is JUST a collection of data. Developers know that better than most, since many of us use two dimensional arrays all the time, and call them matrices, but, many times we don't really do any "proper" math with them.

So, if it's just a collection of data, why's it useful? Well, the idea is that if you are able to lay out your data in a sort of table form, it makes it much easier to reason about mathematically, because you can consider rows and columns in different ways.

One very important thing to realize about matrices is that if someone gives you this matrix:


$\begin{array}{ccc}
1 & 1 & 0 \\
1 & 2 & 2 \\
2 & 3 & 1 \end{array}$

You can't assume anything about it (except for a couple of very unhelpful things, like the fact that you can consider it in rows and columns), so, you must have some idea about what the matrix is about. For example, this one could be a group of simultaneous equations' coefficients.

You might have noticed that so far I haven't really explained how a rotation matrix actually works.

Before I begin, you should have a basic understanding of vectors, which are incredibly simple (just an arrow, with a point at the head and another at the tail).

Basically, what does rotation actually mean? It means that we're taking vectors on a plane, and we're moving them so that they represent different vectors (as if we rotated the plane itself).

Now, take a step back, and pretend as if, instead of moving about the vectors, we are rotating the plane in which they reside. A bit like this:


(That took me almost an hour to make, so be grateful!)

where the red axes are those of the rotated plane.
So, how do the values map out?

What I'm asking is if we take an arbitrary point $(x,y)$ and then we rotate it about the origin by some angle $\theta$, what will the new point be, in terms of $x$ and $y$? Diagrams always help:




As you can see, the original point is $D$, which we have rotated around the origin ($E$) by a certain angle $\angle FED$ to the point $F$.

So, let's say that $D$ has the coordinates $(x, y)$, and we're trying to find $F$ in terms of $x$ and $y$.

Some immediate conclusions (since $D = (x, y)$):

$DH = y$
$EH = x$

To make our lives easier (mostly my life easier) let's give the angles some symbols:

$\angle{DEH} = \phi$
$\angle{FED} = \theta$

And, since we are rotating about the origin, these must be true:

$EF = ED$
Let $EF=ED=r$

So, considering $\triangle{FEG}$, we can say:

$\cos{FEG} = EG/EF = EG/r  ... (1)$
$\sin{FEG} = FG/EF = FG/r   ... (2)$

Notice that $EG$ is the $x$ coordinate of $F$ and $FG$ is the $y$ coordinate of $F$, which is what we've set out to find. 

And, we notice this:

$\cos{FEG} = \cos{(\theta+\phi)} = \cos{\theta}\cos{\phi}-\sin{\theta}\sin{\phi} ... (3)$
$\sin{FEG} = \sin{(\theta+\phi)} = \sin{\theta}\cos{\phi}+\cos{\theta}\sin{\phi} ... (4)$

Equating (1) with (3), and (2) with (4):

$EG = r(\cos{\theta}\cos{\phi}-\sin{\theta}\sin{\phi})$
$FG = r(\sin{\theta}\cos{\phi}+\cos{\theta}\sin{\phi})$

Now, we simplify using the simple facts that $r\cos{\phi} = x$ and $r\sin{\phi} = y$ (look at $\triangle{DEH}$),

And, we get:

$EG = x\cos{\theta}-y\sin{\theta}$
$FG = y\cos{\theta}+x\sin{\theta}$

And, that's what we wanted!

You might be wondering: "What does this have anything to do with matrices?"
If you weren't thinking about that, please pretend you were at this point.

So, in the equations that we found above, the only parts dependent on $\theta$ (the angle by which we rotate) are the trig functions, so, what if we just represented them separately? Then we'd be able to concisely represent all kinds of rotations on a plane.

Basically, what we're doing is this:

$\text{Rotated vector} = \text{Some witchcraft}\cdot \text{Initial vector}$

So, in a bit clearer terms:

$\begin{array}{c} x' \\ y' \\ \end{array} = \text{Some witchcraft} \cdot \begin{array}{c} x \\ y \\ \end{array}$

(where the columns of numbers are vectors, I can't get the brackets to display, sorry)

That $\text{Some witchcraft}$ is a matrix!

And, that's because matrix multiplication is defined in such a way that we can multiply a matrix and a vector together, and get equations resembling those that we got for $EG$ and $FG$!


$\begin{array}{cc} \cos{\theta} & -\sin{\theta} \\ \sin{\theta} & \cos{\theta} \end{array}$

That right there is the matrix we use, and we multiply it with the $(x, y)$ vector to get the equations we got for $EG$ and $FG$, because matrix multiplication is defined that way!
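In code, applying that matrix to a vector is just the two equations we derived (a quick Python sketch):

```python
# Rotate (x, y) about the origin by theta radians, exactly per the
# EG/FG equations above.
import math

def rotate(x, y, theta):
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

print(rotate(1.0, 0.0, math.pi / 2))   # ~(0.0, 1.0): a quarter turn
```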

You might think this is all a "rigged game", and it is, but the amazing thing is, despite this seemingly arbitrary operation of matrix multiplication, it works in all kinds of different places, because it is based on one of the most important computations in mathematics, known as a linear combination.

And, matrices are used EVERYWHERE!

From analytic geometry to simultaneous equations to economics and game theory, and that's mostly because you are allowed to make whatever assumptions about the data you would like, and the idea of matrices lets you perform math on that data with a few constructs that are applicable pretty much anywhere.

I would love your feedback on this article, and ideas for new ones!

If you like this, please consider following this blog, it gives me motivation to keep writing blog posts, and you'll be notified as soon as a new one comes out.

For the people who like the abstract algebra series, I took a little break from it because I was a bit bored with it, and I couldn't think of the next topic easily. 

I'd be happy to answer questions, so, you can put them in the comments section below (I'll usually respond immediately), or email me at dhaivatpandya AT g m a i l DOT com.

And, just before I close, I'm doing a startup called NimbleNotes, which allows you to write productively (this blog post was written entirely with NimbleNotes), and it's currently in closed beta, with an IndieGoGo project.

Have a nice day :)


Friday, January 13, 2012

Please don't do dates and times like this!

So, you might have read my last post.

And, that covered only one website, namely http://samsclub.com/.

The problem is much, much worse than that. So, this is what happened.

I downloaded the free Camtasia Studio trial (30 days), and I really, really enjoyed it. But, like all other trials, its time came, and it prompted me to drop some Franklins or it wouldn't let me use it. I was about to close out the application (hey; I didn't actually need it), when I thought maybe I would try setting my system time back a bit, and see if that did anything.

And it did.


It happily pretended like it was 1/7/12 instead of 1/13/12, and I could continue to use my trial for another 6 days (and, after that, I could repeat the same trick again, or I could just set it farther back to begin with). It struck me that maybe a lot of other applications depend on system time.

And, I promise you, there are a LOT of them.

Skype is a big one: if you move your time back and enter a new message, it gets sorted back among your old conversations.

So, please, please don't get dates and times from the system! You need to get them from a synchronized UTC timeserver.

Thursday, January 5, 2012

Stupid coding errors

This funny thing occurred, and I thought it might be interesting.

I was watching someone make a monthly payment on some financed item on Sams Club, and was having some trouble.

The problem was, the computer had an incorrectly configured clock, which had the date set as February 6th, 2012 (this took place on January 5th, 2012).

And, Sams Club's website was using that date as the default value for the payment date.

I thought "Okay, that's not too bad, I can probably just change the date on it". Wrong.

Since SamsClub.com was using Javascript (bad idea) to get the date, it was also setting it as the MINIMUM date, which meant you couldn't put in a date before it!

Now, what's the problem with that?

Well, say I'm a week late on my payment. To make that payment on time, all I have to do is set my system's clock to one week ago, and Sam's club will happily take the payment as if it was delivered a week ago.

Lessons learned:

Never, ever use Javascript to figure out the current date.

At least check the date on the server before inserting it into the database.
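A minimal sketch of that server-side check in Python (the policy here is just an example):

```python
from datetime import date

def validate_payment_date(submitted: date) -> date:
    # Trust the server's clock, never the client's.
    if submitted < date.today():
        raise ValueError("payment date cannot be in the past")
    return submitted
```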

Hopefully this will reach someone at Sam's Club, and they'll fix it...

Sunday, January 1, 2012

Abstract Algebra presented in a non-intimidating way (esp. for developers) - cryptography


Hi!

Hopefully you read the last post, and if you didn't, you really should (http://poincare101.blogspot.com/2011/12/abstract-algebra-presented-in-non.html), because I'll be building on that stuff this time around.

This time, we'll be going over a very interesting and applicable subject (if you've ever used a computer, you've experienced it in action) called cryptography, and more specifically, public-key cryptography.

Let's get started.

From the dawn of humans, there's always been one human who wants another one dead, and as time has gone by, we have invented better and better (or, worse and worse) ways of accomplishing this.

One of such inventions was cryptography.

Communication is incredibly important in war, and Caesar (yes, that Caesar, the guy from Rome you learned about in history class and promptly forgot about after the test) needed a way to communicate with his generals spread out throughout the Roman empire in such a way that even if the messenger was captured, the bad guys wouldn't be able to read the message.

So, he (and presumably his team of scholars he kept handy) developed the Caesar cipher. I won't go into a ton of detail here, mostly because I wrote an entire blog entry about it (http://poincare101.blogspot.com/2011/10/encryption-and-ciphers-part-1-this-is.html).

Basically, you shift the alphabet over the message by a certain amount (that certain amount is called the key) to get the encrypted message, and the person who wants to decrypt it shifts the message back by that key.
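The whole scheme fits in a couple of lines of Python (a sketch handling uppercase A-Z only):

```python
# Shift each letter forward by `key`; shifting by -key decrypts.
def caesar(msg, key):
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A"))
                   for c in msg)

print(caesar("ATTACK", 3))    # DWWDFN
print(caesar("DWWDFN", -3))   # ATTACK
```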

This worked for a little bit, but it required that the people Caesar sent his messages to be told the key, and Caesar didn't trust them enough for that, so this system was abandoned, and Caesar most likely returned to the time-tested and expensive method of sending a messenger with soldiers armed to the teeth.

People entirely overlooked this (rather important, as we'll see later) problem, and instead concentrated on the fact that the Caesar Cipher was easily cracked with statistical methods (all of this is explained quite thoroughly in the blog post I linked above).

Slowly, cryptography turned into a pastime for amateurs and enthusiasts, and it was not much of a subject by itself, until more and more developments followed, and the subject was looked upon by mathematicians.

One of these new and better ciphers is called the Vigenere (I refuse to take the time to put the accents in the right places) Cipher.
The basic idea was that it no longer kept the letter frequencies of the original text (often called plaintext) intact, making it immune to the statistical methods that could be used on the Caesar Cipher; but, there were other methods it was vulnerable to.

One very large problem was still overlooked: both parties need to transact the key beforehand, or this procedure wouldn't work.

Finally, some clever chap came up with the revolutionary (like, really revolutionary) idea of public key cryptography.

The concept is quite brilliant.

Say, you have Alice and Bob and they want to transact encrypted messages. In public key cryptography, both Alice and Bob each have a public key and a private key. 

The public keys are published somewhere that *everyone* can access them (not necessarily only Alice and Bob).
Now, Alice wants to send Bob a message. She finds Bob's public key and encrypts her message with it, and the sends off this encrypted message to Bob, who then decrypts it with his private key.

That means that the key that is used to decrypt the messages is never transacted, and Alice never needs to know it!

That means that if Caesar had used public key cryptography, he wouldn't have faced the problem he had, because his generals would never have to know his private key!

Of course, there's a problem (actually, a few of them) with that (otherwise there wouldn't be an NSA).
The private key and the public key must somehow be related to each other, so, theoretically, given as much time as needed, it would be possible to derive the private key from the public key. The solution is to generate the public and private keys with a computationally hard problem, so that deriving the private key from the public key takes much too long to be practical.

You might have noticed that I've been quite vague so far; I haven't specified HOW the keys are actually generated, and the encryption is carried out to turn plaintext into ciphertext.

That's where the math begins.

I mentioned that we had to generate the keys with a "computationally hard" problem. What on earth does that mean?!

It's a pretty simple idea: it means that as you increase some parameter, the time taken to solve the problem skyrockets (not a perfect definition, but, it will suffice).

One such problem is called the Discrete Logarithm Problem (DLP, for short).

Basically, you have a prime $p$, and $g, h$ such that both are part of the mod $p$ set (i.e. between $0$ and $p-1$ inclusive).
The problem is to find an $x$ such that:

$h \equiv g^{x} \pmod {p}$

Let's see why that's so hard.

Let's set:

$p = 7$

$h = 6$

$g = 3$

Basically, the only approach to this is we try different values for $x$, find $g^x$, then find $g^x \pmod {p}$, and keep going until we find an $x$ such that

$h \equiv g^{x} \pmod {p}$

If you try this (and I highly encourage you to do so), you will find that $x = 3$ works, since $3^3 = 27 \equiv 6 \pmod{7}$. This is a pretty small example, but, turn up the numbers, and it becomes a pretty formidable problem.
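Here's the brute-force search spelled out in Python (with the tiny numbers from the example above):

```python
# Try x = 1, 2, 3, ... until g^x = h (mod p); essentially the only
# known general approach.
def discrete_log(g, h, p):
    for x in range(1, p):
        if pow(g, x, p) == h:
            return x

print(discrete_log(3, 6, 7))   # 3, since 3^3 = 27 = 6 (mod 7)
```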

Let's look a bit more deeply into this matter, why is this so hard?

Well, because we don't have any way to find the answer without brute force.

Okay, why don't we have another way?

Because it deals with $\mod p$, for which we don't have a ton of tools to deal with.

Exactly! Primes often occur in such problems because we don't have a complete picture of how primes operate; as far as we understand them today, they don't follow some kind of formula that we can plug numbers into and get (all) primes out of.

Alright, now, how are we going to turn this into a key?

There's something called the Diffie Hellman key-exchange system, which is outlined as follows (with Alice and Bob trying to communicate):

1. There's a publicly listed prime number that is decided on; in our example, it will be $19$ ($p = 19$), along with $g=2$, which is also publicly declared.
2. Alice picks a secret number (say, 6), and computes:
    
    $2^6 \equiv 64 \equiv 7 \pmod 19$
    
    and, sends this result off to Bob, and we call this result $A$.

3. Bob does the same (but, with his secret number, which is 13),
    $2^{13} \equiv 3 \pmod{19}$
    
    We'll call this $B$.

4. Now, Alice takes the number she received ($B$), and raises it to her secret number (call it $a$):
    
    $B^a \equiv 3^6 \equiv 7 \pmod 19$

5. Bob does similarly:
  
    $A^b \equiv 7^{13} \equiv 7 \pmod{19}$


As you can see, at the end, both of them share one key, despite never having sent their secret numbers to each other, and since this is based on the DLP, it is incredibly hard to crack, even with a supercomputer, for (very) large values of $p$ (200+ digits).
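Here's the whole exchange as a Python sketch, with the exact numbers from the example (real deployments use primes hundreds of digits long):

```python
p, g = 19, 2          # public
a, b = 6, 13          # Alice's and Bob's secret numbers

A = pow(g, a, p)      # Alice sends 7
B = pow(g, b, p)      # Bob sends 3

print(pow(B, a, p))   # Alice computes 7
print(pow(A, b, p))   # Bob computes 7 -- the same shared key
```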

So, what does any of this have to do with groups?

Well, I'm glad you asked.

Groups (and abstract algebra in general), lets us generalize.

Using the definitions of cyclic groups that define only the properties we need, we can say this (quoted verbatim from Wikipedia):

  1. Alice and Bob agree on a finite cyclic group $G$ and a generating element $g$ in $G$. (This is usually done long before the rest of the protocol; $g$ is assumed to be known by all attackers.) We will write the group $G$ multiplicatively.
  2. Alice picks a random natural number $a$ and sends $g^a$ to Bob.
  3. Bob picks a random natural number $b$ and sends $g^b$ to Alice.
  4. Alice computes $(g^b)^a$.
  5. Bob computes $(g^a)^b$.

This immediately makes the system a LOT more powerful, because now, instead of dealing with just numbers, we can deal with ANY cyclic group we want, they can even be chess moves, if you can somehow wrangle them into a cyclic group.

Why is that helpful?

Well, if you're using natural numbers to do your bidding (since they form a cyclic group), you're still not fully secure; there are other cyclic groups thought to be "more secure" (because of various properties of the groups themselves), and a popular such cyclic group is the points on an elliptic curve.

To explain *why* this actually helps requires a great deal more knowledge, but, the idea of using groups to specify how the system works is profound; you can swap in any system you'd like, as long as you can make it a "cyclic group".


Hopefully you enjoyed that :)

All feedback is greatly appreciated!

If you have any questions (or feedback), please drop a comment below, and I'll do my best at answering them!