You've probably heard about it from all those crazy functional programming people, but you don't *really* know what it means... I'm here to fix that!
Consider a procedure/method written in an imperative language.
In such a procedure, the program can do anything it wants!
That means it could multiply two numbers, crash your car, and then save those numbers to a database.
The things that it does that affect the outside world are called "side effects".
In a purely functional programming language, side effects don't exist.
But, in "almost" purely functional programming languages (in which you can actually do something practical stuff like writing programs that do things), the things that have "side effects" are confined to a couple of functions that interact with the outside, but, the rest of the functions (called pure functions) are entirely separated from the outside world, which means that they can be executed anywhere can we can expect the same results.
Consider math for a second.
Mathematics is a purely functional language, because there's no way it can affect the "outside world" (at least, mainstream math can't).
Say you define a function called $f$:
$f(x) = x^2$
Now, what do you get when you compute $f(2)$?
$f(2) = 4$
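If you'd rather see that in code, here's the same function in Haskell (a direct transliteration of the math above):

```haskell
-- The f from the math above: square the input.
f :: Integer -> Integer
f x = x ^ 2
```

Load it in GHCi and evaluate `f 2`; you'll get `4`, every single time.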
Okay, so, what's so important about that?
That simple statement means that no matter what state the rest of the world is in, if you give the function $f$ a value of $2$, you can bet the farm that it will return $4$.
The same holds true for pure functions in functional programming languages!
And, that's what referential transparency means.
Basically, we're saying that if we plug in some value $m$ into some function $f$, we can always expect to get the same value back ($f(m)$), regardless of the state of the rest of the stuff we're dealing with.
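Put another way, a call to a pure function can always be swapped for its result (and vice versa) without changing what the program means. A tiny Haskell sketch of that substitution, reusing the $f$ from above:

```haskell
f :: Integer -> Integer
f x = x ^ 2

-- These two definitions mean exactly the same thing: every occurrence
-- of f 2 can be swapped for the value 4, and nothing can tell the difference.
lhs :: Integer
lhs = f 2 + f 2   -- 8

rhs :: Integer
rhs = 4 + 4       -- also 8, with each call replaced by its value
```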
Why's that useful?
First of all, that's a heck of an improvement for compilers/interpreters.
If your program computes the value of $f(2)$ 40 million times, the compiler will (usually) only compute it once (this also has to do with how expression trees get rewritten when evaluating functional languages, but the basic idea is here).
This also lifts the requirement that code run sequentially: since pure computations can't interfere with each other, they can be run in whatever order (or even in parallel) happens to be fastest.
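Here's the kind of rewrite this makes safe, written out by hand in Haskell (a sketch of the idea; a real compiler has its own machinery for this):

```haskell
f :: Integer -> Integer
f x = x ^ 2

-- Forty million calls to f 2...
slow :: Integer
slow = sum [ f 2 | _ <- [1 .. 40000000 :: Int] ]

-- ...can be replaced with a single call, because purity guarantees
-- every one of them returns the same answer.
fast :: Integer
fast = let v = f 2 in v * 40000000
```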
Referential transparency makes code a lot easier to reason about and debug, since you KNOW that something will return a certain value when it receives a certain set of values as input.
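That payoff shows up most clearly in testing: a pure function needs no setup, no mocks, and no teardown. You just feed it inputs and compare outputs (a toy sketch, reusing $f$):

```haskell
f :: Integer -> Integer
f x = x ^ 2

main :: IO ()
main = do
  print (f 2 == 4)      -- True on every run, on every machine
  print (f (-3) == 9)   -- True: negative inputs square the same way
```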
There you go: hopefully you now understand referential transparency.