Good article. Funnily enough, the throwaway line "I don't see parentheses anymore" is my greatest deterrent with lisp. It's not the parens per se, it's the fact that I'm used to reading top to bottom and left to right. Lisp without something like the clojure macro -> means that I am reading from right to left, bottom to top - from inside out.
If I programmed enough in lisp I think my brain would adjust to this, but it's almost like I can't fully appreciate the language because it reads in the "wrong order".
> It's not the parens per se, it's the fact that I'm used to reading top to bottom and left to right. Lisp without something like the clojure macro -> means that I am reading from right to left, bottom to top - from inside out.
I’m not certain how true that really is. This:
foo(bar(x), quux(y), z);
looks pretty much identical to:
(foo (bar x) (quux y) z)
And of course if you want to assign them all to variables:
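(let ((bar-x (bar x))
      (quux-y (quux y)))
  (foo bar-x quux-y z))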
can you imagine saying something like
> The fradlis language encourages your average reader to think of essays as syntax [instead of content].
and thinking it reflects well on the language?
A reciprocating saw[0] is a great tool to have. It can be used to manipulate drywall, cut holes in various materials, and generally allow "freehand cutting."
But it is not the right tool for making measured, repeatable cuts. It is not the right tool for making perfect right-angle cuts, such as what is needed for framing walls.
In other words, use the right tool for the job.
If a problem is not best expressed with an AST mindset, LISP might not be the right tool for that job. But this is a statement about the job, not about the tool.
There is some value in pairing them together in that if something is missing, you know what is missing. Like, where is the error here?
(let (a b c d e) ...)
we can't tell at a glance which variable is missing its initializer.
Another aspect to this is that Common Lisp allows a variable binding to be expressed in three ways:
var
(var)
(var init-form)
For instance
(let (i j k (l) (m 9)) ...)
binds i, j, k and l to an initial value of nil, and m to 9.
Interleaved vars and initforms would make initforms mandatory, which is not a bad thing.
Now suppose we have a form of let which evaluates only one expression (let variable-bindings expr), which is mandatory. Then there is no ambiguity; we know that the last item is the expr, and everything before that is variables. We can contemplate the following syntax:
(let a 2 b 3 (+ a b)) -> 5
This is doable with a macro. If you would prefer to write your Lisp code like this, you can have that today and never look back. (Just don't call it let; pick another name like le!)
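A rough sketch of such a macro, treating everything before the last form as name/value pairs (the name le is just an example):

(defmacro le (&rest args)
  (let ((bindings (butlast args))
        (body (car (last args))))
    `(let ,(loop for (var val) on bindings by #'cddr
                 collect (list var val))
       ,body)))

With that definition, (le a 2 b 3 (+ a b)) evaluates to 5.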
If I have to work with your code, I will grok that instantly and not have any problems.
In the wild, I've seen a let1 macro which binds one variable:
(let1 var init-form statement1 statement2 ... statementn)
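A minimal sketch of how such a macro could be defined (real versions may differ in details):

(defmacro let1 (var init-form &body body)
  `(let ((,var ,init-form))
     ,@body))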
I am not a Lisp expert by any stretch, but let's clarify a few things:
1. Just for the sake of other readers, we agree that the code you quoted does not compile, right?
2. `let` is analogous to a scope in other languages (an extra set of {} in C), I like using it to keep my variables in the local scope.
3. `let` is structured much like other function calls. Here the first argument is a list of assignments, hence the first double parenthesis (you can declare without assigning, in which case the double parenthesis disappears since it's a list of variables, or `(variable value)` pairs).
4. The rest of the `let` arguments can be seen as the body of the scope; you can put any number of statements there. Usually these are function calls, so (func args) and it is parenthesis time again (see the small example below).
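For instance, putting 3 and 4 together (a made-up example):

(let ((message "hello")   ; (variable value) pair
      count)              ; declared without assigning; starts as nil
  (print message)         ; body: any number of statements
  (setf count 3)
  (* count 2))            ; the value of the last form is returned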
I get that the parenthesis can get confusing, especially at first. One adjusts quickly though, using proper indentation helps.
I mostly know lisp through guix, and... SKILL, which is a proprietary derivative from Cadence; they added a few things like inline math, SI suffixes (I like that one), and... C "calling convention", which I just find weird: the compiler interprets foo(something) as (foo something). As I understand it, this just moves the opening parenthesis before the preceding word prior to evaluation, if there is no space before it.
I don't particularly like it, as that messes with my C instincts, especially when it comes to spotting the scope. I find the syntax more convoluted with it, so harder to parse (not everything is a function, so parenthesis placement becomes arbitrary).
Strictly speaking, there's a more direct translation using `setq`, which is more analogous to variable assignment in C/Python than the `let` binding, but `let` is idiomatic in lisps and closures in C/Python aren't really distinguished from functions.
The code is written the same way it is logically structured. `let` takes 1+ arguments: a set of symbol bindings to values, and 0 or more additional statements which can use those symbols. In the example you are replying to, `bar-x` and `quux-y` are symbols whose values are set to the result of `(bar x)` and `(quux y)`. After the binding statement, additional statements can follow. If the bindings aren't kept together in a `[]` or `()` you can't tell them apart from the code within the `let`.
You'd never actually write that, though. An empty lambda would be more concisely written as []{}, although even that is a rare case in real world code.
The tragedy of Lisp is that postfix-esque method notation just plain looks better, especially for people with the expectation of reading left-to-right.
let bar_x = x.bar()
let quux_y = y.quux()
return (bar_x, quux_y, z).foo()
"Looks better" is subjective, but it has its advantages both for actual autocomplete - as soon as I hit the dot key my IDE can tell me the useful operations for the object - and also for "mental autocomplete" - I know exactly where to look to find useful operations on the particular object because they're organized "underneath" it in the conceptual hierarchy. In Lisps (or other languages/codebases that aren't structured in an OOP-ish way) this is often a pain point for me, especially when I'm first trying to make my way into some code/library.
As a bit of a digression:
The ML languages, as with most things, get this (mostly) right, in that by convention types are encapsulated in modules that know how to operate on them - although I can't help but think there ought to be more than convention enforcing that, at the language level.
There is the problem that it's unclear - if you can Frobnicate a Foo and a Baz together to make a Bar, is that an operation on Foos, on Bazes, or on Bars? Or maybe you want a separate Frobnicator to do it? (Pure) OOP languages force you to make an arbitrary choice, Lisp and co. just kind of shrug, and the ML languages let you take your pick, for better or worse.
It's not really subjective because people have had the opportunity to program in the nested 'read from the inside out' style of lisp for 50 years and almost no one does it.
I think the cost of Lisp machines was the determining factor. Had it been ported to more operating systems earlier, history could be different right now.
That was 40 years ago. If people wanted to program inside out with lots of nesting then unfold it in their head, they would have done it at some point a long time ago. It just isn't how people want to work.
People don't work in postfix notation either, even though it would be more direct to parse. What people feel is clearer is much more important.
It's not just Lisp, though. The prefix syntax was the original one when the concept of records/structs were first introduced in ALGOL-like languages - i.e. you'd have something like `name(manager(employee))` or `name OF manager OF employee`. Dot-syntax was introduced shortly after and very quickly won over.
TXR Lisp has this notation, combined with Lisp parenthesis placement.
Rather than obj.f(a, b), we have obj.(f a b).
1> (defstruct dog ()
     (:method bark (self) (put-line "Woof!")))
#<struct-type dog>
2> (let ((d (new dog)))
     d.(bark))
Woof!
t
The dot notation is more restricted than in mainstream languages, and has a strict correspondence to underlying Lisp syntax, with read-print consistency.
3> '(qref a b c (d) e f)
a.b.c.(d).e.f
Cannot have a number in there; that won't go to dot notation:
4> '(qref a b 3 (d) e f)
(qref a b 3 (d)
e f)
Chains of dot method calls work, by the way:
1> (defstruct circular ()
     val
     (:method next (self) self))
#<struct-type circular>
2> (new circular val 42)
#S(circular val 42)
3> *2.(next).(next).(next).(next).val
42
There must not be whitespace around the dot, though; you simply cannot split this across lines. In other words:
*2.(next)
.(next) ;; nope!
.(next) ;; what did I say?
The "null safe" dot is .? It checks obj for nil first; if obj is nil, the expression yields nil rather than trying to access the object or call a method.
And what about when `bar` takes several inputs? Postfix seems like an ugly hack that hyper-fixates on functions of a single argument to the detriment of everything else.
It's not like postfix replaces everything else. You can still do foo(bar, baz) where that makes the most sense.
However, experience shows that functions having one "special" argument that basically corresponds to grammatical subject in natural languages is such a common case that it makes sense for PLs to have syntactic sugar for it.
Look at the last line in the example, where I show a method being called on a tuple. Postfix syntax isn't limited to methods that take a single argument.
The only case where it's a bit different and took some time for me to adjust was that adding bindings adds an indent level.
(let ((a 12)
      (b 14))
  (do-something a)
  (do-something-else b)
  (setf b (do-third-thing a b)))
It's still mostly top-bottom, left to right. Clojure is quite a bit different, but it's not a property of lisps itself I'd say. I have a hard time coming up with examples usually so I'm open to examples of being wrong here.
Your example isn't a very functional code style though so I don't know that I'd consider it to be idiomatic. Generally code written in a functional style ends up indented many layers deep. Below is a quick (and quite tame) example from one of the introductory guides for Racket. My code often ends up much deeper. Consider what it would look like if one of the cond branches contained a nested cond.
I must have missed that memo. Sure it's remarkably flexible and simultaneously accommodates other approaches, but most of the code I see in the wild leans fairly heavily into a functional style. I posted a CL link in an adjacent comment.
Besides arrow-macros there's also cl-arrows, which is basically exactly the same thing, and Serapeum also has arrow macros (though the -> macro in Serapeum is for type definitions, the Clojure-style arrow macro is hence relegated to ~>).
Been programming in Lisp for a while. The parens disappear very quickly. One trick to accelerate it is to use a good editor with structural editing (e.g., paredit in Emacs or something similar). All your editing is done on balanced expressions. When you type “(“, the editor automatically inserts “)” with your cursor right in between. If you try to delete a “)”, the editor ignores you until you delete everything inside and the “(“. Basically, you start editing at the expression level, not so much at the character or even line level. You just notice the indentation/shape of the code, but you never spend time counting parentheses or trying to balance anything. Everything is balanced all the time and you just write code.
Not sure of other lisps, but clojure has piping. I was under the impression in general that composing functions is pretty standard in FP. For example the above can be written:
(-> (* 2 PI x) sin sqrt log)
Also while `comp` in clojure is right to left, it is easy to define one left to right. And if anything, it even uses less parentheses than the OOP example, O(1) vs O(n).
To me any kind of deep nesting is an issue. It goes against the idea of reducing the amount of mental context window needed to understand something.
Plus, syntax errors can easily take several minutes to fix: if the syntax is wrong, auto-format doesn't work right, and then you have to read a wall of text to find out where the missing close paren should have been.
> Good article. Funnily enough, the throwaway line "I don't see parentheses anymore" is my greatest deterrent with lisp. It's not the parens per se, it's the fact that I'm used to reading top to bottom and left to right.
Language shapes the way we think, and determines what we can think about.
- Benjamin Lee Whorf[0]
From the comments in the post:
Ask a C programmer to write factorial and you will likely get something like this (excuse the underbars, they are there because blogger doesn't format code in comments):
int factorial (int x) {
  if (x == 0)
    return 1;
  else
    return x * factorial (x - 1);
}
And the Lisp programmer will give you:
(defun factorial (x)
  (if (zerop x)
      1
      (* x (factorial (- x 1)))))
Let's see how we can get from the LISP version to something akin to the C version.
First, let's "modernize" the LISP version by replacing parentheses with "curly braces" and add some commas and newlines just for fun:
{
  defun factorial { x },
  {
    if { zerop x },
    1 {
      *,
      x {
        factorial {
          - { x, 1 }
        }
      }
    }
  }
}
This kinda looks like a JSON object. Let's make it into one and add some assumed labels while we're at it.
Now, if we replace "defun" with the return type, replace some of the curlies with parentheses, get rid of the labels we added, use infix operator notation, and not worry about it being a valid JSON object, we get:
int
factorial ( x )
{
  if ( zerop ( x ) )
    1
  else
    x * factorial ( x - 1 )
}
Reformat this a bit, add some C keywords and statement delimiters, and Bob's your uncle.
This is the most elementary hurdle a lisp programmer will face. You do indeed become adjusted to it quite quickly. I wouldn’t let this deter you from exploring something like Clojure more deeply.
Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing. Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display. And that's where the most complications come from: these devices you are communicating with have quirks that you need to deal with. Purely functional programming is very nice in theory, but how far can you actually get away with it?
The idea of pure functional programming is that you can really go quite far if you think of your program as a pure function f(input) -> outputs with a messy impure thing that calls f and does the necessary I/O before/after that.
Batch programs are easy to fit in this model generally. A compiler is pretty clearly a pure function f(program source code) -> list of instructions, with just a very thin layer to read/write the input/output to files.
Web servers can often fit this model well too: a web server is an f(request, database snapshot) -> (response, database update). Making that work well is going to be gnarly in the impure side of things, but it's going to be quite doable for a lot of basic CRUD servers--probably every web server I've ever written (which is a lot of tiny stuff, to be fair) could be done purely functional without much issue.
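A tiny sketch of that shape in Lisp (every name here is made up, and the "database" is just a plist): the handler is pure, returning a response plus a data description of the update, and the surrounding server loop does the actual network and database I/O.

(defun handle-request (request db-snapshot)
  ;; Pure: (request, snapshot) -> (values response db-update)
  (if (equal (getf request :path) "/count")
      (values (list :status 200
                    :body (princ-to-string (getf db-snapshot :count)))
              (list :increment :count))   ; describes the update, doesn't perform it
      (values (list :status 404 :body "not found")
              nil)))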
Display can also be made to work: it's f(input event, state) -> (display frame, new state). Building the display frame here is something like an immediate mode GUI, where instead of mutating the state of widgets, you're building the entire widget tree from scratch each time.
In many cases, the limitations of purely functional isn't that somebody somewhere has to do I/O, but rather the impracticality of faking immutability if the state is too complicated.
I guess my point is that you actually have to write the impure code somehow and it's hard; the external world has a tendency to fail, needs to be retried, coordinated with other things. You have to fake all these issues. In your web server example, if you need a cache layer for a certain part of the data, you really can't add one without encoding it in the state management tooling. And at this point you are writing a lot of non-functional code in order to glue it together with pure functions and maybe do some simple transformation in the middle. Is it worth it?
I have respect for OCaml, but that's mostly because it allows you to write mutable code fairly easily.
Roc codifies the world vs core split, but I'm skeptical how much of the world logic can be actually reused across multiple instances of FP applications.
There's a spectrum of FP languages with Haskell near the "pure" end where it truly becomes a pain to do things like io and Clojure at the more pragmatic end where not only is it accepted that you'll need to do non functional things but specific facilities are provided to help you do them well and in a way that can be readily brought into the functional parts of the language.
(I'm biased though as I am immersed in Clojure and have never coded in Haskell. But the creator of Clojure has gone out of his way to praise Haskell a bunch and openly admits where he looked at or borrowed ideas from it.)
> external world has tendencies to fail, needs to be retried, coordinated with other things.
This is exactly why I'm so aggressive in splitting IO from non-IO.
A pure function generally has no need to raise an exception, so if you see one, you know you need to fix your algorithm not handle the exception.
Whereas every IO action can succeed or fail, so those exceptions need to be handled, not fixed.
> You have to fake all these issues.
You've hit the nail on the head. Every programmer at some point writes code that depends on a clock, and tries to write a test for it. Those tests should not take seconds to run!
In some code bases, the full time is actually taken:
handle <- startProcess
while handle.notDone
sleep 1000ms
check handle.result
In other code-bases, some refactoring is done, and a fake clock is invented:
fakeClock <- new FakeClock(10:00am)
handle <- startProcess(fakeClock);
fakeClock.setTime(10:05am)
waitForProcess handle
Why not go even further and just pass in a time, not a clock?
let result = process(start=10:00am, stop=10:05)
Typically my colleagues are pretty accepting of doing the work to fake clocks, but they don't generalise that solution to faking other things, or even to skipping the fakes and operating directly on the inputs or outputs.
Does your algorithm need to upload a file to S3? No it doesn't, it needs to produce some bytes and a url where those bytes should go. That can be done in unit-test land without any IO or even a mocking framework. Then some trivial one-liner higher up the call-chain can call your algorithm and do the real S3 upload.
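As a sketch of that split (s3-put stands in for whatever client call you actually use; the other names are made up too):

(defun plan-report-upload (report-id rows)
  ;; Pure: decide the destination URL and the bytes to write; unit-testable with no IO.
  (values (format nil "s3://my-bucket/reports/~a.csv" report-id)
          (format nil "~{~a~%~}" rows)))

(defun upload-report (report-id rows)
  ;; Impure one-liner at the edge of the program.
  (multiple-value-bind (url bytes) (plan-report-upload report-id rows)
    (s3-put url bytes)))   ; s3-put is a placeholder for the real client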
I completely agree, but I still question the purpose of FP languages. Writing the S3 upload code is quite hard, if you really want to handle all possible error scenarios. Even if you use whatever library for that, you still need to know which errors it can trigger, which need to be handled, and how to handle them. The mental work can be equal to the core function for generating the file. In any language, I'd separate these two pieces of code, but I'm not sure if I'd want to handle S3 upload logic with all the error handling in an FP language. That said, I've not used Clojure yet and that seems like a very pragmatic language, which might be actually usable even for these parts of the code.
* Encapsulation? What's the point of having it if it's perfectly sealed off from the world? Just dead-code eliminate it.
* Private? It's not really private if I can Get() to it. I want access to that variable, so why hide it from myself? Private adds nothing because I can just choose not to use that variable.
* Const? A constant variable is an oxymoron. All the programs I write change variables. If I want a variable to remain the same, I just won't update it.
Of course I don't believe in any of the framings above, but it's how arguments against FP typically sound.
Anyway, the above features are small potatoes compared to the big hammer that is functional purity: you (and the compiler) will know and agree upon whether the same input will yield the same output.
Where am I using it right now?
I'm doing some record linkage - matching old transactions with new transactions, where some details may have shifted. I say "shifted", but what really happened was that upstream decided to mutate its data in-place. If they'd had an FPer on the team, they would not have mutated shared state, and I wouldn't even need to do this work. But I digress.
Now I'm trying out Dijkstra's algorithm, to most efficiently match pairs of transactions. It's a search algorithm, which tries out different alternatives, so it can never mutate things in-place - mutating inside one alternative will silently break another alternative. I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist. But I wish I didn't have to be so vigilant. I really miss Haskell doing that part of my carefulness for me.
I don't want functional-flavoured programming, I want functional programming.
Back when I was more into pushing Haskell on my team (10+ years ago), I pitched the idea something like:
You get: the knowledge that your function's output will only depend on its input.
You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
Those higher-order functions are a tough sell for programmers who only ever want to do things the way they've always done them.
But 5 years after that, in Java-land everyone was using maps, folds and filters like crazy (or in C# land, Selects and Wheres and SelectManys etc.), with some half-thought-out bullshit reasoning like "it's functional, so it must be good!"
Using map, fold etc. is not the hard part of functional programming. The hard part is managing effects (via monads, monad transformers, or effects). Trying to convert a procedural inner mutating algorithm to say Haskell is challenging.
Never used monads with Clojure (the only Lisp I've done "serious" work in). Haskell introduced them to me, but I've never done anything large with Haskell (no jobs!). Scala, however, has monads via the cats or (more recently) the ZIO library and they work just fine there.
The main problem with Monads is you're almost always the only programmer on a team who even knows what a Monad is.
One struggle I’ve had with wrapping my head around using FP and lisp-like languages for a “real world” system is handling something like logging. Ideally that’s handled outside of the function that might be doing a data transformation, but how do you build a log message that outputs information about old and new values without contamination of your “pure” transducer?
You could, I guess, have a “before” step that iterates your data stream and logs all the before values, and then an “after” step that iterates after and logs all the after values, and get something like:
But doesn’t that cause you to iterate your data 2x more times than you “need” to and also split your logging into 2x as many statements (and thus 2x as much IO)
So, do you mean like you have some big array, and you want to do something like this? (Below is not a real programming language.)
for i in 0 to arr.len() {
  new_val = f(arr[i]);
  log("Changing {arr[i]} to {new_val}.\n");
  arr[i] = new_val;
}
I haven't used Haskell in a long time, but here's a kind of pure way you might do it in that language, which I got after tinkering in the GHCi REPL for a bit. In Haskell, since you want to separate IO from pure logic as much as possible, functions that would do logging return instead a tuple of the log to print at the end, and the pure value. But because that's annoying and would require rewriting a lot of code manipulating tuples, there's a monad called the Writer monad which does it for you, and you extract it at the end with the `runWriter` function, which gives you back the tuple after you're done doing the computation you want to log.
You shouldn't use Text or String as the log type, because using the Writer involves appending a lot of strings, which is really inefficient. You should use a Text Builder, because it's efficient to append Builder types together, and because they become Text at the end, which is the string type you're supposed to use for Unicode text in Haskell.
So, this is it:
import qualified Data.Text.Lazy as T
import qualified Data.Text.Lazy.Builder as B
import qualified Data.Text.Lazy.IO as TIO
import Control.Monad.Writer
mapWithLog :: (Traversable t, Show a, Show b) => (a -> b) -> t a -> Writer B.Builder (t b)
mapWithLog f = mapM helper
  where
    helper x = do
      let x' = f x
      tell (make x <> B.fromString " becomes " <> make x' <> B.fromString ". ")
      pure x'
    make x = B.fromString (show x)

theActualIOFunction list = do
  let (newList, logBuilder) = runWriter (mapWithLog negate list)
  let log = B.toLazyText logBuilder
  TIO.putStrLn log
  -- do something with the new list...
So "theActualIOFunction [1,2,3]" would print:
1 becomes -1. 2 becomes -2. 3 becomes -3.
And then it does something with the new list, which has been negated now.
Does this imply that the logging doesn't happen until all the items have been processed though? If I'm processing a list of 10M items, I have to store up 10M*${num log statements} messages until the whole thing is done?
Alternatively, the Writer can be replaced with "IO", then the messages would be printed during the processing.
The computation code becomes effectful, but the effects are visible in types and are limited by them, and effects can be implemented both with pure and impure code (e.g. using another effect).
The effect can also be abstract, making the processing code kinda pure.
In a language with unrestricted side effects you can do the same by passing a Writer object to the function. In pure languages the difference is that the object can't be changed observably. So instead its operations return a new one. Conceptually IO is the same with the object being "world", so computation of type "IO Int" is "World -> (World, Int)". Obviously, the actual IO type is opaque to prevent non-linear use of the world (or you can make the world cloneable). In an impure language you can also perform side-effects, it is similar to having a global singleton effect. A pure language doesn't have that, and requires explicit passing.
Yes, it does imply that, except since Haskell is lazy, you'll be holding onto a thunk until the IO function is evaluated, so you won't have a list of 10 million messages in memory up until you're printing, and even then, lists are lazy, too, so you won't ever have all entries of the list in memory at once, either, because list entries are also thunks, and once you're done printing it, you'll throw it away and evaluate a thunk to create the next cons cell in the list, and then you evaluate another thunk to get the item that the next cell points to and print it. Everything is implicitly interleaved.
In the case above, where I constructed a really long string, it depends on the type of string you use. I used lazy Text, which is internally a lazy list of strict chunks of text, so that won't ever have to be in memory all at once to print it, but if I had used the strict version of Text, then it would have just been a really long string that had to be evaluated and loaded into memory all at once before being printed.
Sorry, I lack a lot of context for Haskell and its terms (my experience with FP is limited largely to forays into Lisp / Clojure), but if I'm understanding right, you're saying because the collection is being lazily evaluated, the whole process up to the point of re-combining the items back into their final collection will be happening in a parallel manner, so as long as the IO is ordered to occur before that final collection, it will occur while other items are still being processed? So if the program were running and the system crashed half way through, we'd still have logs for everything that was processed up to the point it crashed (modulo anything that was inflight at the time of the crash)?
What happens if there are multiple steps with logging at each point? Say perhaps a program where we want to:
1) Read records from a file
2) Apply some transformations and log
3) Use the resulting transformations as keys to look up data from a database and log that interaction
4) Use the result from the database to transform the data further if the lookup returned a result, or drop the result otherwise (and log)
5) Write the result of the final transform to a different file
and do all of the above while reporting progress information to the user.
And to be very clear, I'm genuinely curious and looking to learn so if I'm asking too much from your personal time, or your own understanding, or the answer is "that's a task that FP just isn't well suited for" those answers are acceptable to me.
> And to be very clear, I'm genuinely curious and looking to learn so if I'm asking too much from your personal time, or your own understanding, or the answer is "that's a task that FP just isn't well suited for" those answers are acceptable to me.
No, that's okay, just be aware that I'm not an expert in Haskell and so I'm not going to be 100% sure about answering questions about Haskell's evaluation system.
IO in Haskell is also lazy, unless you use a library for it. So it delays the action of reading in a file as a string until you're actually using it, and in this case that would be when you do some lazy transformations that are also delayed until you use them, and that would be when you're writing them to a file. When you log the transformations, only then do you start actually doing the transformations on the text you read from the file, and only then do you open the file and read a chunk of text from it, like I said.
As for adding a progress bar for the user, there's a question on StackOverflow that asks exactly how to do this, since IO being lazy in Haskell is kind of unintuitive.
The answers include making your own versions of the standard library IO functions that have a progress bar, using a library that handles the progress bar part for you, and reading the file and writing the file in some predefined number of bytes so you can calculate the progress yourself.
But, like the other commenter said, you can also just do things in IO functions directly.
It's entirely up to you. You can just write Haskell with IO everywhere, and you'll basically be working in a typical modern language but with a better type system. Main is IO, after all.
> if the program were running and the system crashed half way through, we'd still have logs for everything that was processed up to the point it crashed
Design choice. This one is all IO and would export logs after every step:
forM_ entries $ \entry -> do
  (result, logs) <- process entry
  export logs
  handle result
Remember, if you can do things, you can log things. So you're not going to encounter a situation where you were able to fire off an action, but could not log it 'because purity'.
For debugging purposes, there's Debug.Trace, which does IO and subverts the type system to do so.
But with Haskell, I tend to do less debugging anyway, and spend more time getting the types right to begin with; when there's a function that doesn't work but still type checks, I feed it different inputs in GHCi and reread the code until I figure out why, and this is easy because almost all functions are pure and have no side effects and no reliance on global state. This is probably a sign that I don't write enough tests from the start, so I end up doing it like this.
But, I agree that doing things in a pure functional manner like this can make Haskell feel clunkier to program, even as other things feel easier and more graceful. Logging is one of those things where you wonder if the juice is worth the squeeze when it comes to doing everything in a pure functional way. Like I said, I haven't used it in a long time, and it's partly because of stuff like this, and partly because there's usually a language with a better set of libraries for the task.
> Logging is one of those things where you wonder if the juice is worth the squeeze
Yeah, because it's often not just for debugging purposes. Often you want to trace the call and its transformations through the system and systems. Including externally provided parameters like correlation ids.
Carrying the entire world with you is bulky and heavy :)
> You get: the knowledge that your function's output will only depend on its input.
> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
You're my type of guy. And literally none of my coworkers in the last 10 years were your type of guy. When they read this, they don't look at it in awe, but in horror. For them, functions should be allowed to have side effects, and for loops are a basic thing they see no good reason to abandon.
Statistically, most of one's coworkers will never have looked at a functional language, let alone used one to write actual code, so it is understandable they don't get it. What makes me sad is the apparent unwillingness to learn such a thing and sticking with "everything must be OOP" even in situations where it would be (with a little practice and knowledge in functional languages) simple to make it purely functional and make testing and parallelization trivial.
> Statistically, most of one's coworkers will never have looked at a functional language, let alone used one to write actual code, so it is understandable they don't get it.
I'm not against functional languages. My point was that if you want to encourage others to try it, those two are not what you want to lead with.
Those starred rhetorical questions initially looked to me like a critique of Lisp! Because that's how Lisp (particularly Common Lisp) works. All those things are softish. You can see unexported symbols even if you're not supposed to use them. There is no actual privacy unless you do something special like unintern then recreate a symbol.
> you (and the compiler) will know and agree upon whether the same input will yield the same output
What exactly does this mean? Haskell has plenty of non-deterministic functions — everything involving IO, for instance. I know that IO is non-deterministic, but how is that expressed within the language?
Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.
Functional programming simply says: separate the IO from the computation.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transforms inputs -> outputs. And that can be entirely functional.
> Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transforms inputs -> outputs. And that can be entirely functional.
My work needs pseudorandom numbers throughout the big middle, for example, drawing samples from probability distributions and running randomized algorithms. That's pretty messy in a FP setting, particularly when the PRNGs get generated within deeply nested libraries.
When deeply nested libraries generate PRNGs, all that layering becomes impure and must be treated like any other stateful or IO code. In Haskell, that typically means living with a monad transformer or effect system managing the whole stack, and relatively little pure code remains.
The messiness gets worse when libraries use different conventions to manage their PRNG statefulness. This is a non-issue in most languages but a mess in a 100% pure setting.
What I don't understand about your comment is: Where do these "deeply nested libraries" come from? I use one library or even std library and pass the RNG along in function arguments or as a parameter. Why would there be "deeply nested" libraries? Is it like that in Haskell or something? Perhaps we are using different definitions of "library"?
>Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.
So a program is a function that transforms the input to the output.
Each step calculates the next state and returns it. You can then compose those state calculators. If you need to save the state that’s IO and you have a bit specifically for it.
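A small sketch of composing such state calculators (made-up example): each step is a pure function from state to next state, and REDUCE chains them.

(defun run-steps (initial-state &rest steps)
  (reduce (lambda (state step) (funcall step state))
          steps
          :initial-value initial-state))

;; (run-steps 0 #'1+ (lambda (s) (* s 10))) => 10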
It takes a bit of discipline, but generally all state additions should be scoped to the current context. Meaning, when you enter a subcontext, it has become input and treated as holy, and when you leave to the parent context, only the result matters.
But that particular context has become impure and is decried as such in the documentation, so that carefulness is increased when interacting with it.
Can you please elaborate on this point? I read it as this web page (https://wiki.c2.com/?SeparateIoFromCalculation) describes, but I fail to see why it is a functional programming concept.
> but I fail to see why it is a functional programming concept.
"Functional programming" means that you primarily use functions (not C functions, but mathematical pure functions) to solve your problems.
This means you won't do IO in your computation because you can't do that. It also means you won't modify data, because you can't do that either. Also you might have access to first class functions, and can pass them around as values.
If you do procedural programming in C++ but your functions don't do IO or modify (not local) values, then congrats, you're doing functional programming.
Thanks. I now see why it makes sense to me. I work in DE so in most of our cases we do streaming (IO) without any transformation (computation), and then we do transformation in a total different pipeline. We never transform anything we consumed, always keep the original copy, even if it's bad.
> I fail to see why it is a functional programming concept.
Excellent! You will encounter 0 friction in using an FP then.
To the extent that programmers find friction using Haskell, it's usually because their computations unintentionally update the state of the world, and the compiler tells them off for it.
Think about this: if a function calls another function that produces a side effect, both functions become impure (non-functional). Simply separating them isn't enough. That's the difference when thinking of it in functional terms
Normally what functional programmers will do is pull their state and side effects up as high as they can so that most of their program is functional
Having functions which do nothing but computation is core functional programming. I/O should be delegated to the edges of your program, where it is necessary.
> The interesting part is what you actually do in the middle to transforms inputs -> outputs.
Can you actually name something? The only thing I can come up with is working with interesting algorithms or datastructures, but that kind of fundamental work is very rare in my experience. Even if you do, it's quite often a very small part of the entire project.
A whole web app. The IO are generally user-facing network connections (request and response), IPC and RPC (databases, other services), and file interaction. Anything else is logic. An FP program is a collection of pipes, and IO are the endpoints. With FP the blob of data passes cleanly from one section to another, while in imperative, some of it sticks. In OOP, there are a lot of blobs that fling stuff at each other and in the process create more blobs.
- Interacting with other unspecified systems through IPC, RPC or whatever (databases mainly)
The shit in between, calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers? I'm not being obtuse - intentionally anyway - I'm actually curious what interesting things functional programmers do because I'm not seeing much of it.
Edit: my point is, you say "Anything else is logic." to which I respond "What's left?"
> calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers?
A LOT, depending on the domain. There are many R&D and HPC labs throughout the US in which programmers work directly with specialists in the hard sciences. A significant percentage of their work is akin to "calculating a derivative".
Yes! In most projects, those requirements are stretched across technicalities like IOs. But you can pull them back to the core of your project. It takes effort, but the end result is a pleasure to work with. It can be done with FP, OOP, LP,…
> Even if you do, it's quite often a very small part of the entire project.
So your projects are only moving bits from one place to another? I've literally never seen that in 20 years of programming professionally. Even network systems that are seen as "dumb pipes" need to parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.
Surely the program calculates something, otherwise why would you need to run the program at all if the output is just a copy of the input?
Yes and I notice you still did not provide an interesting example. Surely parsing packets is not an interesting example of functional programming's powers?
What interesting things do you do as a programmer, really?
> parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.
That's a few more than zero. I don't do network programming, that was just an example to show how even the quintessential IO-heavy application requires non-trivial calculations internally.
Fair enough. It's just that in my experience the "cool bits" are quickly done and then we get bogged down in endless layers of inter-systems communication (HTTP, RPC, file systems, caches). I often see FP people saying stuff like "it's not 100% pure, of course there are some isolated side-effects" and I'm thinking.. my brother, I live inside side-effects. The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
But of course this heavily depends on the domain you are working in. Some people work in simulation or physics or whatever and that's where the interesting bits begin. (Even then I'm thinking "programming" is not the interesting bit, it's the physics)
> my brother, I live inside side-effects. The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
I've never seen what you work on so there is no way I can say this with certainty, but generally people unfamiliar with functional programming have way more code that is (or can be) pure in their code base than they realize. Or put the opposite way: if you were to go line by line in your code (skipping lines of comments and whitespace) and give every line a yes/no on whether it performs IO, what percentage are actually performing IO? Not lines related to IO, or preparing for or handling the results of IO, but how many lines are the actual line to write to the file or send the network packet?
Generally, it's a much smaller percentage than people are thinking because they are usually associating actual IO with things "related to" or "preparing for" or "handling results from" IO.
And then after finding that percentage to be lower than expected, it can also be made to be significantly lower by following a few functional programming design approaches.
> The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
A big part of it, I'm sure, but it requires some work. Pushing the side effects to the edge requires some abstractions to not directly mess with the original mutable state.
You are, in fact, designing a state diagram from something that was evolving continuously on a single dimension: time. The transitions of the state diagram are the code and the nodes are the inputs and outputs of that code. Then it became clear that IOs only matter when storing and loading those nodes. Because those nodes are finite and well defined, the non-FP code for dealing with them became simpler to write.
It's a matter of framing. Think of any of the following:
- Refreshing daily "points" in some mobile app (handling the clock running backward, network connectivity lapses, ...)
- Deciding whether to send a marketing e-mail (have you been unsubscribed, how recently did you send one, have you sent the same one, should you fail open or closed, is this person receptive to marketing, ...)
- How do you represent a person's name and transform it into the things your system needs (different name fields, capitalization rules, max characters, what if you try to put it on an envelope and it doesn't fit, ...)
- Authorization logic (it's not enough to "just use a framework" no matter your programming style; you'll still have important business logic about who can access what when and how the whole thing works together)
And so on. Everything you're doing is mapping inputs to outputs, and it's important that you at least get it kind of close to correct. Some people think functional programming helps with that.
When I see this list all I can think of is how all these things are just generic, abstract rules and have nothing to do with programming. This, of course, is my problem. I have a strange mental model of things.
I can't shake off the feeling we should be defining some clean sort of "business algebra" that can be used to describe these kind of notions in a proper closed form and can then be used to derive or generate the actual code in whatever paradigm you need. What we call code feels like a distraction.
I am wrong and strange. But thanks for the list, it's helpful and I see FP's points.
You're maybe strange (probably not, when restricted to people interested in code), but wrongness hasn't been proven yet.
I'd push back, slightly, in that you need to encode those abstract rules _somehow_, and in any modern parlance that "somehow" would be a programming language, even if it looks very different from what we're used to.
From the FP side of things, they'd tend to agree with you. The point is that these really are generic, abstract rules, and we should _just_ encode the rules and not the other state mutations and whatnot that also gets bundled in.
That implicitly assumes a certain rule representation though -- one which takes in data and outputs data. It's perfectly possible, in theory, to describe constraints instead. Looking at the example of daily scheduling in the presence of the clock running backward; you can define that in terms of inputs and outputs, or you can say that the desired result satisfies (a) never less than the wall clock, (b) never decreases, (c) is the minimal such solution. Whether that's right or not is another story (it probably isn't, by itself -- lots of mobile games have bugs like that allowing you to skip ads or payment forever), but it's an interesting avenue for exploration given that those rules can be understood completely orthogonally and are the business rules we _actually_ care about, whereas the FP, OOP, and imperative versions must be holistically analyzed to ensure they satisfy business rules which are never actually written down in code.
You can name almost anything (these are general-purpose languages, after all), but I'll just throw a couple of things out there:
1. A compiler. The actual algorithms and datastructures might not be all that interesting (or they might be if you're really interested in that sort of thing), but the kinds of transformations you're doing from stage to stage are sophisticated.
2. An analytics pipeline. If you're working in the Spark/Scala world, you're writing high-level functional code that represents the transformation of data from input to output, and the framework is compiling it into a distributed program that loads your data across a cluster of nodes, executes the necessary transformations, and assembles the results. In this case there is a ton of stateful I/O involved, all interleaved with your code, but the framework abstracts it away from you.
Thanks, especially two is very interesting. Admittedly the framework itself is the actually interesting part and that's what I meant with this work being "rare" (I mean how many people work on those kinds of frameworks fulltime? It's not zero, but..)
I think what I engaged with is the notion that most programming "has some side-effects" ("it's not 100% pure"), but much of what I see is like 95% side-effects with some cool, interesting bits stuffed in between the endless layers of communication (without which the "interesting" stuff won't be worth shit).
I feel FP is very, very cool if you got yourself isolated in one of those interesting layers but I feel that's a rare place to be.
Yeah, it's not that e.g. Haskell won't allow side effects, it's that side effects are constrained: 1) all the side-effectful operations have types that forbid you from using them outside of a side-effect context; 2) and it's a good thing they do, because Haskell's laziness means the results you would get otherwise are counterintuitive.
Other FP frameworks are far less strict about such things, and many FP features are now firmly in the mainstream. So no, I don't think this stuff is particularly rare, though Haskell/OCaml systems probably still are. There are pluses and minuses with structuring code in a pure-core-with-side-effect-shell way – FP enthusiasts tend to think the pluses outweigh the minuses.
Best, I think, to view FP not as dogma or as a class of FP-only languages, but rather as a paradigm first, a set of languages second.
It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).
But most functions in Common Lisp do mutate things, there is an extensive OO system and the most hideous macros like LOOP.
I certainly never felt constrained writing Common Lisp.
That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).
> It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).
It's less about what the language "allows" you to do and more about how the ecosystem and libraries "encourage" you to do.
Any useful program has side-effects. IMHO the point is to isolate the part of the code that has the side-effects as much as possible, and keep the rest purely functional. That makes it easier to debug, test, and create good abstractions. Long term it is a very good approach.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Erlang is strictly (?) a functional language, and the reason why it was invented was to do network-y stuff in the telco space. So I'm not sure why I/O and functional programming would be opposed to each other like you imply.
He says Lisp, rather than Common Lisp. Sure, given the context he's writing in now, maybe he means Common Lisp, but Joe Marshall was a Lisp programmer before Common Lisp existed, so he may not mean Common Lisp specifically.
Somehow haskell and friends shifted the discussion around functional programming to pure vs non-pure! I am pretty sure it started with functions as first-class objects as the differentiator in schemes, lisps and ml family languages. Thus functional, but that's just a guess.
A function pointer is already half way there. What it lacks is lexical environment capture.
And things that are possible to do with closures never stop amazing me.
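A classic tiny example of what that capture buys you:

(defun make-counter ()
  ;; Returns a closure that keeps its own private COUNT.
  (let ((count 0))
    (lambda () (incf count))))

;; (defparameter *tick* (make-counter))
;; (funcall *tick*) => 1
;; (funcall *tick*) => 2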
Anyways, functional programming is not about purity. It is something that came from academia, with 2 major language families: ML-likes and Lisp-likes, each focusing on certain key features.
And purity is not even the key feature of MLs in general.
They are one of those language features that, having learned them, it's a little hard to flip my brain around into the world I knew before I learned them.
If I think hard, I can sort of remember how I used to do things before I worked almost exclusively in languages that natively support closures ("Let's see... I create a state object, and it copies or retains reference to all the relevant variables... and for convenience I put my function pointer in there too usually... But I still need rules for disposing the state when I'm done with it..." It's so much nicer when the language handles all of that bookkeeping for you and auto-generates those state constructs).
Curiously, higher-order functions, and the concept of something like a closure, dates back to the earliest days of PL design - and I'm not even talking about Lisp! Algol-60, the granddaddy of pretty much every modern mainstream programming language, already had the notion of nested functions (which closed over variables from the surrounding scopes) and the ability to pass those functions to other functions.
They weren't fully first-class because there were no function-typed variables, nor could you return a function. Even so, this already lets you do stuff like map and filter. And Algol-60 programs from that era did use those capabilities, e.g.:
PROCEDURE EULER (FCT, SUM, EPS, TIM);
  VALUE EPS, TIM;
  INTEGER TIM;
  REAL PROCEDURE FCT;
  REAL SUM, EPS;
BEGIN
  INTEGER I, K, N, T;
  ARRAY M [0 .. 15];
  REAL MN, MP, DS;
  I := N := T := 0;
  M[0] := FCT(0);
  SUM := M[0] / 2;
NEXTTERM:
  I := I+1;
  MN := FCT(I);
  FOR K := 0 STEP 1 UNTIL N DO
  BEGIN
    MP := (MN + M[K]) / 2;
    M[K] := MN;
    MN := MP
  END;
  IF (ABS(MN) < ABS(M[N]) AND N < 15) THEN
  BEGIN
    DS := MN/2;
    N := N+1;
    M[N] := MN
  END
  ELSE
    DS := MN;
  SUM := SUM + DS;
  IF ABS(DS) < EPS THEN
    T := T + 1
  ELSE
    T := 0;
  IF T < TIM THEN
    GOTO NEXTTERM
END;
No, functions aren't first class in C. When you use a function in an expression it is converted ("decays") to a pointer to the function. You can only call, store, etc. function pointers, not functions. Function pointers are first class. Functions are not, as you can't create them at runtime.
A functional programming language is one with first class functions.
The other important bit here is garbage collection.
Local and anonymous functions that capture lexical environments really, really work much better in languages built around GCs.
Without garbage collection a trivial closure (as in javascript or lisps) suddenly needs to make a lot of decisions around referencing data that can be either on the stack or in the heap.
Yes, C++ is a great example of having to make decisions that don't have good solutions without a GC or something like it. See the mentions of undefined behaviour in the relevant sections of the standard, e.g. when a lambda captures something with a limited lifetime.
Are you saying that Haskell doesn't have lexical environments? It very much does, just as all major languages of the ML language family do.
I wrote a recursive descent parser in Lisp for a YAML replacement language[1]. It wasn't difficult. Lisp makes it easy to write I/O, but also easy to separate logic from I/O. This made it easy for me to write unit tests without mocking.
I also wrote a toy resource scheduler at an HTTP endpoint in Haskell[2]. Writing I/O in Haskell was a learning curve but was ultimately fine. Keeping logic separate from I/O was the easy thing to do.
It's about minimizing and isolating state and side effects, not eliminating them completely
Functional core, imperative shell is a common pattern. Keep the side effects on the outside. Instead of doing side effects directly, just return a data structure that can be used to enact the side effect
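A tiny sketch of "return data that describes the effect" (names made up): the pure part builds a description, and a thin shell interprets it.

(defun plan-greeting (name)
  ;; Pure: returns a description of the side effect, not the effect itself.
  (list :write-line (format nil "Hello, ~a!" name)))

(defun run-effect (effect)
  ;; Imperative shell: actually performs the described effect.
  (ecase (first effect)
    (:write-line (write-line (second effect)))))

;; (run-effect (plan-greeting "world")) prints "Hello, world!"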
As others have said, a pure program is a useless program. The only place stuff like that has in this world is as a proof assistant.
What I will add is look up how the GHC runtime works, and the STGM. You may find it extremely interesting. I didn't "get" functional programming until I found out about how exotic efficient execution of functional programs ends up being.
"Purely Functional Programming", I guess mostly Haskell/Purescript.
So this really only means:
Purely Functional Programming by default.
In most programming languages you can write
"hello " + readLine()
And this would intermix a pure function (string concatenation) with an impure effect (asking the user to write some text). And it would work perfectly.
But in doing so, the order of evaluation becomes essential.
With pure functional programming (by default), you must explicitly separate the part of your program doing I/O from the part doing only pure computation. This is enforced using a type system focusing on I/O. Hence the difference between Haskell's default `IO` and OCaml, which does not need it, for example.
in Haskell you are forced by the type system to write something like:
do
  name <- getLine
  let s = "Hello " <> name <> "!"
  putStrLn s
you cannot mix the `getLine` directly in the middle of the concatenation operation.
But while this is a very different style of programming, I/O is just more explicit, and it "costs" more, because code doing I/O is not as elegant or as easy to manipulate as pure code. Thus it naturally induces a way of coding that makes you conscious of the parts of your program that need I/O and the parts you could do with only pure functions.
In practice, ... yep, you end up working in a "specific to your application domain" monad that looks a lot like the IO monad, but which will most often contain IO.
Another option is to use a free monad for your entire program, which lets you write in your own domain language and control its evaluation (either using IO or another system that simulates IO but is not really IO, typically for testing purposes).
This is a good write up. Thank you for it. My experience with Haskell only comes from university and the focus there was primarily on the pure side of the code. I'll have a look at how Haskell deals with the pure/impure split for some real-world tasks. The Lisp way of doing it just seems weird to me, too ad hoc, not really structured.
OpenSCAD is such a good school of functional programming. There is no "time" or flow of execution. Or variables, scopes and environments. You are not constructing a program, but a static model which has desired properties in space and time.
The point of functional, "sans I/O" style is to separate the definition of I/O from the rest of your logic. You're still doing I/O, but what sorts of I/O you're doing has a clear self-contained definition within your program. https://sans-io.readthedocs.io/how-to-sans-io.html
There is no reason you can't use side effects in pure functional programming. You just need to provide the appropriate description of the side effect to avoid caching and force a particular evaluation order. If you have linear types, you do it by passing around opaque tokens. I'm not entirely sure how IO works in Haskell, but I think the implementation is similar. Even C compilers use a system like that internally.
The boundary between the program and the rest of the system allows I/O of course. What FP does is "virtualize" I/O by representing it as data (thus it can be passed around). Then at some point these changes get "committed" to the outside. Representing I/O separately from how it is carried out allows a lot of things to be done, such as cancelling (ctrl+z) operations.
Everyone writes real programs that have side effects. Functional programming is no different. But the side effects happen in specific ways or places, rather than all over the place.
Most of the code in most programs is not the part that is doing the I/O. It's doing stuff on a set of values to transform them. It gets values from somewhere, does stuff using those values, and then outputs some values. The complicated part is not the transfer of the final byte sequence to whatever I/O interface they go to, the core behavior of the program is the stuff that happens before that.
There are ways to handle side effects with pure functions only (it’s kind of cheating, because the actual side effects are performed by the non-pure runtime/framework that’s abstracted away, while the pure user code just defines when to perform them and how to respond to them). It’s possible, but it gets very awkward very fast. I wouldn’t use FP for any part of the code that deals with IO.
I had the same question until I understood one key pattern of pure functional programming. Not sure it has a name but here goes.
There is world, and there is a model of the world - your program. The point of the program, and all functions, is to interact with the model. This part, data structures and all, is pure.
The world interacts with the model through an IO layer, as in haskell.
I think it's the imperative shell, functional core pattern. The shell provides the world, the core acts on it, and the shell commits it at various intervals.
Functional React follows this pattern. The issue is when the programmer thinks the world is some kind of stable state that you can store results in. It's not; the whole point is to be created anew and restart the whole computation flow. The escape hatches are the hooks, and each has a specific usage and pattern to follow to survive world recreation. Which is why you should be careful with them, as they are effectively the world for subcomponents. So when you add to the world with hooks, interactions with the addition should stay at the same level.
> Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing
Where have you ever heard anyone talk about side-effect free programs, outside of academic exercises? The linked post certainly isn't about 100% side-effect/state free code.
Usually, people talk about minimizing side-effects as much as possible, but since we build programs to do something, sometimes connected to the real world, it's basically impossible to build a program that is both useful and 100% side-effect free, as you wouldn't be able to print anything to the screen, or communicate with other programs.
And minimizing side-effects (and minimizing state overall) has a real impact on how easy it is to reason about the program. Being really careful about where you mutate things leads to most of the code being very explicit about what it's doing, and code only affects data that is close to where the code itself is, compared to intertwined state mutation, where things everywhere in the codebase can affect state anywhere.
The pragmatic approach is to see that FP's key point is statelessness and use that in your code (written in more mainstream languages) when appropriate.
Had a PalmPilot taped to a modem that did our auth. Lisp made the glue code feel like play. No types barking, no ceremony—just `(lambda (x) (tinker x))`. We didn’t debug, we conversed. Swapped thoughts with the REPL like it was an old friend.
Though these are minor complaints, there are a couple of things I'd like to change about a Lisp language.
One is the implicit function calls. For example, you'll usually see calls like this: `(+ 1 2)` which translates to 1 + 2, but I would find it clearer if it was `(+(1,2))`, where you have a certain explicitness to it.
It doesn't stop me from using Lisp languages (Racket is fun, and I've been investigating Clojure) but it took way too long for the implicit function stuff to grok in my brain.
My other complaint is how the character `'` can have overloaded meaning, though I'm not entirely sure if this is implementation dependent or not.
It's not really implicit though, the first element of a list that is evaluated is always a function. So (FUN 1 2) is an explicit function call. The problem is that it doesn't look like C-like languages, not that it's not explicit.
In theory ' just means QUOTE, it should not be overloaded (although I've mostly done Common Lisp, so no idea if in other impl that changes). Can you show an example of overloaded meaning?
Still unsure whether a naked single quote character for (quote …) is really a good idea.
Got used to it by typing out (quote …)-forms explicitly for a year. The shorthand is useful at the REPL but really painful in backquote-templates until you type it out.
CL:QUOTE is a special operator and (CL:QUOTE …) is a very important special form. Especially for returning symbols and other code from macros. (Read: Especially for producing code from templates with macros.)
Aside: Lisp macros solve the C-fopen() fclose() dance for good. It even closes the file handle on error, see WITH-OPEN-FILE. That alone is worth it. And the language designers provided the entire toolset for building stuff like this, for free.
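For readers who haven't seen it, the shape is roughly this (standard Common Lisp; the filename is just an example):

    ;; WITH-OPEN-FILE opens the stream, runs the body, and guarantees the
    ;; handle is closed on normal exit and on error alike.
    (with-open-file (out "example.txt" :direction :output
                                       :if-exists :supersede)
      (write-line "hello" out))

Under the hood it is essentially OPEN plus UNWIND-PROTECT wrapped in a macro, which is exactly the kind of thing the language lets anyone write themselves.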
No matter how unusual it seems, it really is worth getting used to.
There's an example I saw where `'` was used as a way to denote a symbol, but I can't find that explicit example. It wasn't SBCL; I believe it may have been Clojure. It's possible I'm misremembering.
That said, since I work in C-like languages during the day, I suppose my minor complaint has to do with ease of transition, it always takes me a minute to get acquainted to Lisp syntax and read Lisp code any time I work with it.
Its really a minor complaint and one I probably wouldn't have if I worked with a Lisp language all day.
That's correct, but it's not that denoting a symbol is a different function of '; it's the same function: turn code into data. That's all symbols are (in CL at least).
For example, in a quoted list you don't need to quote the symbols, because they are already in a quoted expression!
'(hello hola)
' really just says "do not evaluate what's next, treat it as data"
To be a bit more precise: every time you have a name in Common Lisp, that is already a symbol. But it will get evaluated. If it's the first item of an evaluated list it will be looked up in the function namespace; elsewhere it will be looked up in the variable namespace. What quote does is just ask the compiler not to evaluate it and to treat it as data.
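A small illustration of the two namespaces and of quote (a REPL sketch; `foo` is just an example name):

    (defun foo (x) (* x 10))   ; FOO in the function namespace
    (defvar foo 3)             ; FOO in the variable namespace (a separate slot)

    (foo foo)                  ; => 30 -- the first FOO is looked up as a function,
                               ;          the second FOO as a variable
    'foo                       ; => FOO -- quoted, so the symbol itself, unevaluated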
The tricky bit is that it doesn’t quite denote a symbol.
Think of a variable name like `x` usually referring to a value. Example in C89:
int x = 5 ;
printf("%i\n", x) ;
The variable is called `x` and happens to have the value of integer 5. In case you know the term, this is an "rvalue" as opposed to an "lvalue".
In C-land (and in the compiler) the name of this variable is the string "x". In C-land this is often called the identifier of this variable.
In Python you also have variables and identifiers. Example Python REPL (bash also has this):
>>> x = 5
>>> x
5
In Common Lisp they are called symbols instead of identifiers. Think of Python 3's object.__dict__["x"].
Lisp symbols (a.k.a. identifiers a.k.a. variable names) are more powerful and more important than in C89 or Python, because there are source code templates. The most important use-case for source code templates are lisp macros (as opposed to C89 #define-style macros). This is also where backquote and quasiquote enter the picture.
In Lisp you can create a variable name (a.k.a. an identifier a.k.a. a symbol) with the function INTERN (bear with me.)
(intern "x")
is a bit like adding "x" to object.__dict__ in Python.
Now for QUOTE:
Lisp exposes many parts of the compiler and interpreter (lisp-name "evaluator") that are hidden in other languages like C89 and Python.
Like with the Python example ">>>" above:
We get the string "x" from the command line. It is parsed (leaving out a step for simplicity) and the interpreter is told to look up variable "x", gets the value 5 and prints that.
(QUOTE …) means that the interpreter is supposed to give you back the piece of code you gave it instead of interpreting it. So (QUOTE x) or 'x — note the dangling single quote — returns or prints the variable name of the variable named "x".
Better example:
(+ 1 2)
Evaluates to number 3.
(quote (+ 1 2))
and
'(+ 1 2)
both evaluate to the internal source code of "add 1 and 2" one step short of actually adding them.
In source code templates you sometimes provide code that has to be evaluated multiple times, like the iconic increment "i++" has to be evaluated multiple times in many C89 loops. This is where QUOTE is actually useful. (Ignored a boatload of detail for the sake of understandability.)
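To make the template idea concrete, here is a tiny, hypothetical macro written with backquote; the commas splice the caller's form into the template, and because the form arrives as data (unevaluated), the expansion can run it twice:

    (defmacro do-twice (form)
      ;; FORM is inserted into the template as code, so it is evaluated
      ;; twice at run time -- much like i++ in the body of a C loop.
      `(progn ,form ,form))

    (do-twice (print "hi"))   ; prints "hi" two times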
First time I saw (+ 1 2), I thought it was a typo. Spent an hour trying to “fix” it into (1 + 2). My professor let me. Then he pointed at the REPL and said, “That’s not math—it’s music.” Never forgot that. The '? That’s the silent note.
R works exactly as you describe. You can type `+`(1, 2) and get 3, because in R everything that happens is a function call, even if a few binary functions get special sugar so you can type 1 + 2 for them as well. The user can of course make their own of these if they wrap them in percent signs. For example: `%plus%` = function(a, b) { `+`(a, b) }. A few computer algebra system languages provide even more expressivity, like Yacas and FriCAS. The latter even has a type system.
R is much crazier than that because even things like assignment, curly braces, and even function definitions themselves are all function calls. It's literally the only primitive in the language, to which everything else ultimately desugars.
It's not implicit in this case, it's explicit. + is the function you're calling. And there's power in having mathematical operations be functions that you can manipulate and compose like all other functions, instead of some special case of infix implicit (to me, yeah) function calling, like 1 + 2, where it's no longer similar to other functions.
How is it implicit? The open parenthesis is before the function name rather than after, but the function isn’t called without both parentheses.
If you want to use commas, you can in Lisp dialects I’m familiar with—they’re optional because they’re treated as whitespace, but nothing is stopping you if you find them more readable!
Ancient LISP (caps deliberate) in fact had optional commas that were treated as whitespace. You can see this in the Lisp 1 Programmer's Manual (dated 1960).
This practice quickly disappeared though. (I don't have an exact time line about this.)
func(a, b) is basically the same as (func a b). You're just moving the parens around. '+' is extra 'strange' because in most languages it isn't used like other functions: imagine if you had to write +(1, 2) in every C-like.
Common Lisp supports gradual typing and will (from my experience) do a much better job of analyzing code and pointing out errors than your typical scripting language.
The most impressive thing, to me, about LISP is how the very, very small distance between the abstract syntax tree and the textual representation of the program allows for some very powerful extensions to the language with relatively little change.
Take default values for function arguments. In most languages, that's a careful consideration of the nuances of the parser, how the various symbols nest and prioritize, whether a given symbol might have been co-opted for another purpose... In LISP, it's "You know how you can have a list of symbols that are the arguments for the function? Some of those symbols can be lists now, and if they are, the first element is the symbolic argument name and the second element is a default value."
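In Common Lisp that looks like this (standard &optional syntax; the function itself is just an example):

    ;; The parameter list is itself a list; (greeting "Hello") pairs the
    ;; argument name with its default value.
    (defun greet (name &optional (greeting "Hello") (punctuation "!"))
      (format nil "~a, ~a~a" greeting name punctuation))

    (greet "Ada")            ; => "Hello, Ada!"
    (greet "Ada" "Hi" "?")   ; => "Hi, Ada?"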
Looking for a nice, solid, well-documented library to do something is difficult for most stuff. There are some real gems out there, but usually you end up having to roll your own thing. And Lisp generally encourages rolling your own thing.
Is it about intelligence or just not being used to/having time for learning a different paradigm?
I personally have used LISP a lot. It was a little rough at first, but I got it. Despite having used a lot of languages, it felt like learning programming again.
I don't think there's something special about me that allowed me to grok it. And if that were the case, that's a horrible quality in a language. They're not supposed to be difficult to use.
I think it allows very clever stuff, which I don't think is done routinely, but that's what gets talked about. I try to write clean functional style code in other languages which in my book means separation of things that have side effects and things that don't. I don't think I'll have difficulty writing standard stuff in Lisp with that approach.
Just because it allows intricate wizardry doesn't mean it is inherently hard to get/use. I think the bigger issue would be ecosystem and shortage of talent pool.
JavaScript and Python have adopted almost every feature that differentiated Lisp from other languages. So in comparison Lisp is just more academic, esoteric, and advanced.
This is only true if you define "Lisp" as the common subset. If you look specifically at Common Lisp, neither Python nor JS come close in terms of raw power.
It's pretty rare because when it was being originally designed the main intended use case was for bragging about the cool unique niche programming language you use on your blog posts. While it's a very good language for being able to recursively get itself onto the front page, it's not so good at conformist normie use cases.
I agree with some statements OP makes but not others. Ultimately, I write in lisp because it's fun to write in Lisp due to its expressive power, ease of refactoring, and the Lisp Discord[1].
> Lisp is easier to remember,
I don't feel this way. I'm always consulting the HyperSpec or googling the function names. It's the same as any other dynamically typed language, such as Python, this way to me.
> has fewer limitations and hoops you have to jump through,
Lisp as a language has incredibly powerful features found nowhere else, but there are plenty of hoops. The CLOS truly feels like a superpower. That said, there is a huge dearth of libraries. So in that sense, there's usually lots of hoops to jump through to write an app. It's just that I like jumping through them because I like writing code as a hobby. So fewer limitations, more hoops (supporting libraries I feel the need to write).
> has lower “friction” between my thoughts and my program,
Unfortunately I often think in Python or Bash because those are my day job languages, so there's often friction between how I think and what I need to write. Also AI is allegedly bad at lisp due to reduced training corpus. Copilot works, sorta.
> is easily customizable,
Yup, that's its defining feature. Easy to add to the language with macros. This can be very bad, but also very good, depending on its use. It can be very worth it both to implementer and user to add to the language as part of a library if documented well and done right, or it can make code hard to read or use. It must be used with care.
> and, frankly, more fun.
This is the true reason I actually use Lisp. I don't know why. I think it's because it's really fun to write it. There are no limitations. It's super expressive. The article goes into the substitution principle, and this makes it easy to refactor. It just feels good having a REPL that makes it easy to try new ideas and a syntax that makes refactoring a piece of cake. The Lisp Discord[1] has some of the best programmers on the planet in it, all easy to talk to, with many channels spanning a wide range of programming interests. It just feels good to do lisp.
As much as I sympathize with this post and similar ones, and as much I personally like functional thinking, LISP environments are not nearly as advanced anymore as they used to be.
Which Common LISP or Scheme environment (that runs on, say, Ubuntu Linux on a typical machine from today) gets even close to the past's LISP machines, for example? And which could compete with IntelliJ IDEA or PyCharm or Microsoft's VS Code?
But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
> good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE)
I can't speak for those other editors, but my experience with Alive has been pretty bad. I can't imagine anyone recommending it has used it. It doesn't do what slime does, and because of that, you're forced to use Emacs.
Calva for Clojure, however, is very good. I don't know why it can't be this way for CL.
> The usage experience was very ergonomic, much more ergonomic than I'm used to with my personal CL set-up. Still, the inability to inspect stack frame variables would bother me, personally.
I don't use them, but I'd recommend Pulsar's SLIMA over the VSCode plugin, because it's older and based on Slime, where ALIVE is based on LSP.
> But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
right. SBCL has core compression, so as I said a web app with dozens of dependencies and all static assets is ±35MB, and that includes the compiler and debugger (which allow one to connect to and update a running image, whereas this wouldn't be possible with LispWorks' stripped-down binary). 35MB for a non-trivial app is good IMO (and in the ballpark of a growing Go app, right?)
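For reference, the dump being described is a one-liner in SBCL (a sketch; MAIN is a hypothetical entry function, and :compression needs an SBCL built with core-compression support):

    ;; Dump the running image as a self-contained, compressed executable.
    (sb-ext:save-lisp-and-die "my-app"
                              :executable t
                              :toplevel #'main
                              :compression t)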
There's also ECL, if you rely on libecl you can get very small binaries (I didn't explore this yet, see example in https://github.com/fosskers/vend)
I will check out your courses to see if I can learn something from you.
> Maybe you tried some time ago? This experience report concludes by "all in all, a great rig"
No, I've read that article before. Not being able to inspect variables in the stack frame kills a lot of the point of a REPL, or even a debugger, so I wouldn't use Alive (and most people don't). But the article represents this as a footnote for a reason.
Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
In Clojure, if people ask what platforms it can reach, people will often say "anywhere." They will tell you to use Flutter through Clojure Dart (unfinished) or Babashka (lots of caveats to using this if all you want is small binaries in your app). Instead of talking about these things as tools with drawbacks, they will lump them all together in a deluge to give the impression the ecosystem is thriving. You did a similar thing in listing every editor under the sun. I doubt you have tried all of these extensively, but I could be wrong.
Same with ECL. Maybe you want the advantages of LISP with smaller binaries. But ECL is slower, supports fewer libraries, and cannot save/dump an image. You're giving up things that you don't normally have to give up in other ecosystems.
But this evangelism works against LISP. People come in from languages with very good tooling and are confused to find that half the things they were told would work do not.
> I will check out your courses to see if I can learn something from you.
Thanks. Don't hesitate to reach out and give feedback. If you're into web, you might find my newer (opinionated) resource helpful: https://web-apps-in-lisp.github.io/
> Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
I see. On social media comments, that's kinda true: I'm so used to hearing "there is no CL editor besides Emacs" (which is literally plain false and has been for years, even if you exclude VSCode), and other timeless FUD. Articles or pages on community resources (Cookbook) should be better measured.
> listing every editor under the sun.
there's an Eclipse plugin (simple), a Geany one (simple), a Sublime one (using Slynk, can be decent?), Allegro (proprietary, tried the web version without color highlighting, surprising), Portacle and plain-common-lisp are easy to install Emacs + CL + Quicklisp bundles…
some would add CLOG as a CL editor.
BTW the IntelliJ plugin is also based on Slime. Not much development activity though. But a revolution for the Lisp world if you think about it. Enough to make me want to mention it twice or thrice on HN.
> tried them extensively
emphasis on "extensively", so no. SLIMA for Atom/Pulsar was decent.
> ECL… slower…
true and for this I've been measured, but looking at how vend is doing would be very interesting, as it ships a very small binary, based on libecl.
Of course you're right. I've written non-trivial programs in Scheme, Emacs is a good tool for it, but certainly don't know of an environment that matches the Lisp machines.
IDEs provide such environments for the most common languages but major IDEs offer meager functionality for Lisp/Scheme (and other "obscure" languages). With a concerted effort it's possible an IDE could be configured to do more for Lisp. Thing is the amount of effort required is quite large. Since AFAIK no one has taken up the challenge, we can only conclude it's not worth the time and energy to go there.
The workflow I've used for Scheme programming is pretty simple. I can keep as many Emacs windows ("frames") open as necessary with different views of one or several modules/libraries, a browser for documentation, terminals with REPL/compiler, etc. Sort of a deconstructed IDE. Likely it does take a bit more cognitive effort to work this way, but it gets the job done.
Even putting the Common Lisp aside, PAIP is my favourite book about programming in general, by FAR. Norvig's programming style is so clear and expressive, and the book touches on more "pedestrian" parts of programming: building tools / performance / debugging, but also walks you through a serious set of algorithms that are actually practical and that I use regularly (and they shape your thinking): search, pattern matching, to some extent unification, building interpreters and compilers, manipulating code as data.
It's also extremely fun, you go from building Eliza to a full pattern matcher to a planning agent to a prolog compiler.
Next time you see a HN post on a lisp-centric topic, click into the comments. I'll bet you a nickel that they'll be happier than most. Instead of phrases like "dumpster fire" they're using words like "joyful".
That's why I keep rekindling my learn-lisp effort. It feels like I'm just scratching the surface re: the fun that can be had.
Could a not-too-trivial example, like the difference between a Java sudoku solver and a Lisp version with all the bells and whistles of FP such as functions as data and return values, recursion and macros, be used to illustrate the benefits?
Here's one in Clojure using its core.logic library. I'd say it's pretty neat. You can do something similar in something like Prolog, but a Java implementation would look very different.
> Other general purpose languages are more popular and ultimately can do everything that Lisp can (if Church and Turing are correct).
I find these types of comments extremely odd and I very much support lisp and lisp-likes (I'm a particular fan of clojure). I can only see adding the parenthetical qualifier as a strange bias of throwing some kind of doubt into other languages which is unwarranted considering lisp at its base is usually implemented in those "other general purpose languages".
If you can implement lisp in a particular language then that particular language can de facto do (at least!) everything lisp can do.
One doesn't have to invoke Turing or Church to show all languages can do the same things.
Any code that runs on a computer (using the von Neumann architecture) boils down to just a few basic operations: Read/write data, arithmetic (add/subtract/etc.), logic (and/or/not/etc.), bit-shifting, branches and jumps. The rest is basically syntactic sugar or macros.
If your preferred programming language is a pre-compiled type-safe object oriented monster with polymorphic message passing via multi-process co-routines, or high-level interpreted purely functional archetype of computing perfection with just two reserved keywords, or even just COBOL, it's all going to break down eventually to the ops above.
Sometimes, when people say one language can't do what another does, they aren't talking about outputs. Nobody is arguing that lisp programs can do arithmetic and others can't, they're arguing that there are ergonomics to lisp you can't approach in other languages.
But even so
> it's all going to break down eventually to the ops above.
That's not true either. Different runtimes will break down into a completely different version of the above. C is going to boil down to a different set of instructions than Ruby. That would make Ruby incapable of doing some tasks, even with a JIT. And writing performance sensitive parts in C only proves the point.
"Any language can do anything" is something we tell juniors who have decision paralysis on what to learn. That's good for them, but it's not true. I'm not going to tell a client we're going to target a microcontroller with PHP, even if someone has technically done it.
I'm sure you are aware there is ultimately a chicken and egg problem here. Even given the case you presented, it doesn't invalidate the point that if it can implement lisp it must be able to do everything lisp can do. In fact given lisp's simplicity, I'd be hard pressed to call a language that couldn't implement lisp "general purpose".
"You're a very clever man, Mr. James, and that's a very good question," replied the little old lady, "but I have an answer to it. And it's this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him."
"But what does this second turtle stand on?" persisted James patiently.
To this, the little old lady crowed triumphantly,
"It's no use, Mr. James—it's turtles all the way down."
This is conflating slightly different things, though? One is that you can build a program that does the same thing. The other is that you can do the same things with the language.
There are special forms in LISP, but that is a far cry from the amount of magic that can only be done in the compiler or at runtime for many languages out there.
Yes, but sometimes doing the things Lisp can do in another language as easily and flexibly as they are done in Lisp has, as a first step, implementing Lisp in the target language.
I've never programmed in a Lisp, but I'd love to learn, it feels like one of those languages like Perl that are just good to know. I do have a job where getting better with SKILL would be useful.
> Lisp's dreaded Cambridge Polish notation is uniform and universal. I don't have to remember whether a form takes curly braces or square brackets or what the operator precedency is or some weird punctuated syntax that was invented for no good reason. It is (operator operands ...) for everything. Nothing to remember. I basically stopped noticing the parenthesis 40 years ago. I can indent how I please.
Well, that might be true for Scheme, but not for CL. There are endless forms for loops. I will never remember all of them, or even a fraction of them. Going through Guy Steele's CL book, I tend to think that I have a hard time remembering most of the forms, functions, and their signatures.
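For what it's worth, the same simple iterations can indeed be spelled several ways in CL, which is part of what makes them hard to memorize (a quick sketch):

    (dotimes (i 3) (print i))                  ; prints 0, 1, 2
    (dolist (x '(a b c)) (print x))            ; prints A, B, C
    (loop for x in '(1 2 3) collect (* x x))   ; => (1 4 9)
    (loop for i from 0 below 3 sum i)          ; => 3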
The word “properly” is not only working hard here, but perhaps pointing to deeper concepts.
In particular, it implies a coherent design around scope and extent. And, much more indirectly, it points to time. EVAL-WHEN has finally made a bit of a stir outside Lisp.
Does this imply that lambda expressions in Haskell and ML don't have a "coherent design around scope and extent"? This is quite a claim, to be honest....
> "properly working lambda expressions were only available in Lisp until recently."
> until -> since
I think "only since recently" is not standard English, but, even if it were, I think it would change the intended meaning to say that they were not available in Lisp until recently, the opposite of what was intended. I find it clearer to move the "only": "were available only in Lisp until recently."
I agree that "properly working lambda expressions were only available in Lisp until recently" is perfectly idiomatic, but easily misunderstood, English. I believe that the suggested fix "properly working lambda expressions were only available in Lisp since recently," which is what I was responding to, is not idiomatic. Claims about what is and isn't idiomatic aren't really subject to definitive proof either way, but it doesn't matter, because the suggester now agrees that it is not what was meant (https://news.ycombinator.com/item?id=43653723).
To be clear, the construction I’m endorsing is: "were available only in Lisp until recently", which is the construction that my editors typically proposed for similarly ambiguous deployments of "only". The ambiguity in the original placement is that it could be interpreted as only available as opposed to available and also something else. My editors always wanted it to be clear exactly what the "only" constrains.
I’m not really familiar with Lisp, but from glancing at this article it seems like all of these are really good arguments for programming in Ruby (my language of choice). Easily predictable syntax, simple substitution between variables and method calls, dynamic typing that provides ad hoc polymorphism… these are all prominent features of Ruby that are much clunkier in Python, JavaScript, or really any other commonly used language that I can think of.
Lisp is on my list of languages to learn someday, but I’ve already tried to pick up Haskell, and while I did enjoy it and have nothing but respect for the language, I ultimately abandoned it because it was just too time-consuming for me to use on a day-to-day basis. Although I definitely got something out of learning to program in a purely functional language, and in fact feel like learning Haskell made me a much better Ruby programmer.
I have about 6 years of ruby experience and if you're saying that ruby has "easily predictable syntax"...
You really should try Lisp. I liked Clojure a lot coming from Ruby because it has a lot of nice ergonomics other Lisps lack. I think you'd get a lot out of it.
Ruby and LISP have a lot of overlap. For my money, LISP is a little more predictable because the polymorphic nature of the language itself is always in your face; you know that you're always staring at a list, and you have no idea without context whether that list is being evaluated at runtime, is literal, or is the body of a macro.
Ruby has all those features but (to my personal taste) makes it less obvious that things are that wild.
(But in both languages I get to play the game "Where the hell is this function or variable defined?" way more often than I want to. There are some advantages to languages that have a strict rule about modular encapsulation and requiring almost everything into the current context... With Rails, in particular, I find it hard to understand other people's code because I never know if a given symbol was defined in another file in the codebase, defined in a library, or magicked into being by doing string transformations on a data source... In C++, I have to rely on grep a lot to find definitions, but in Ruby on Rails not even grep is likely to find me the answer I want! Common LISP is similarly flexible with a lot of common library functions that magick new symbols into existence, but the codebases I work on in LISP aren't as large as the Ruby on Rails codebases I touch, so it bites me less).
On this topic: My absolute favorite Common Lisp special operator is `the`. Usage: `(the value-type form)`, as in `(the integer (do-a-bunch-of-math))`.
At first glance, it looks like your strong-typing tool. And it can be. You can build a static analyzer that will match, as best it can, the type of the form to the value-type and throw errors if they don't match. It can also be a runtime check; the runtime is allowed to treat `the` as an assert and throw an error if there's a mismatch.
But what the spec actually says is that it's a special operator that does nothing but return the evaluation of the form if the form's value is of the right type and the behavior is undefined otherwise. So relative to, say, C++, it's a clean entrypoint for undefined behavior; it signals to a Lisp compiler or interpreter "Hey, the programmer allows you to do shenanigans here to make the code go faster. Throw away the runtime type identifier, re-represent the data in a faster bit-pattern, slam this value into functions without runtime checks, assume particular optimizations will succeed, paint it red to make it go fasta... Just, go nuts."
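A small sketch of both faces of `the` (illustrative only): under high safety an implementation may check the declaration at runtime, and under high speed it may simply trust it and optimize accordingly.

    (defun add-fixnums (a b)
      (declare (optimize (speed 3) (safety 0))
               (fixnum a b))
      ;; Promise that the sum also fits in a fixnum so the compiler can skip
      ;; overflow/bignum handling. If the promise is false, behavior is undefined.
      (the fixnum (+ a b)))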
If you want something similar to Ruby but more functional, try Elixir. The similarities are superficial but might be enough to ease you in.
Haskell is weird. You can express well defined problems with relative ease and clarity, but performance can be kind of wonky and there's a lot more ceremony than your typical Lisp or Scheme or close relative of those. F# can give you a more lispish experience with a threshold about as low as Haskell, but comes with the close ties to an atrocious corporation and similar to Clojure it's not exactly a first class citizen in the environment.
Building stuff in Lisp-likes typically doesn't entail the two programming languages of the primary one and a second one to program the type system; in that way they're convenient in a way similar to Ruby. I like the parens and how they explicitly delimit portions that quite closely relate to the AST step in compilation, or whatever the interpreter sees; it helps with moving things around and molding the code quickly.
FWIW, ‘per se’ comes from the Latin for ‘by itself.’
One of the awesome things about LISP is it encourages a developer to think of programs as an AST[0].
One of the things that sucks about LISP is - master it and every programming language is nothing more than an AST[0].
:-D
0 - https://en.wikipedia.org/wiki/Abstract_syntax_tree
> encourages a developer to think of programs as an AST
can you imagine saying something like
> The fradlis language encourages your average reader to think of essays as syntax [instead of content].
and thinking it reflects well on the language
But it is not the right tool for making measured, repeatable, cuts. It is not the right tool for making perfect right-angle cuts, such as what is needed for framing walls.
In other words, use the right tool for the job.
If a problem is not best expressed with an AST mindset, LISP might not be the right tool for that job. But this is a statement about the job, not about the tool.
0 - https://en.wikipedia.org/wiki/Reciprocating_saw
I think an alternative to paragraphs or some other organizational unit would be a more appropriate analogy.
The AST aspect of Lisps is absolutely an advantage. It obviates the need for the vast majority of syntax and enables very easy metaprogramming.
The lisp is harder to read, for me. The first double paren is confusing.
Why is the second set of parens necessary? The nesting makes sense to an interpreter, I'm sure, but it doesn't make sense to me.
Is each top-level set of parens a 'statement' that executes? Or does everything have to be embedded in a single list?
This is all semantics, but for my python-addled brain these are the things I get stuck on.
The let construct in Common Lisp and Scheme supports imperative programming: the body is a sequence of statements, and if the last statement is reached and evaluates to completion, its value(s) will be the result value(s) of the let.
The variable-bindings occupy one argument position in let. This argument position has to be a list, so we can have multiple variables. Within the list we have about two design choices: just interleave the variables and their initializing expressions, or pair them together. There is some value in pairing them together in that if something is missing, you know what; with a flat interleaved list, when one variable is missing its initializer we can't tell at a glance which one it is.
Another aspect to this is that Common Lisp allows a variable binding to be expressed in three ways: a bare variable, a variable wrapped in parentheses, or a (variable init-form) pair. The first two bind the variable to an initial value of nil. Interleaved vars and initforms would make initforms mandatory. Which is not a bad thing.
Now suppose we have a form of let which evaluates only one expression, (let variable-bindings expr), where the expr is mandatory. Then there is no ambiguity; we know that the last item is the expr, and everything before that is variables, so we can contemplate a syntax where the bindings are interleaved directly in the form. This is doable with a macro. If you would prefer to write your Lisp code like this, you can have that today and never look back. (Just don't call it let; pick another name like le!) If I have to work with your code, I will grok that instantly and not have any problems.
In the wild, I've seen a let1 macro which binds one variable.
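Roughly, the shapes being discussed look like this (a sketch; the `let1` definition is illustrative, not part of standard CL):

    (let ((a 1)    ; (var init-form) pair
          (b)      ; (var) -- bound to NIL
          c)       ; bare var -- also bound to NIL
      (list a b c))          ; => (1 NIL NIL)

    ;; A one-variable LET:
    (defmacro let1 (var init-form &body body)
      `(let ((,var ,init-form)) ,@body))

    (let1 x 10 (* x x))      ; => 100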
I am not a Lisp expert by any stretch, but let's clarify a few things:
1. Just for the sake of other readers, we agree that the code you quoted does not compile, right?
2. `let` is analogous to a scope in other languages (an extra set of {} in C), I like using it to keep my variables in the local scope.
3. `let` is structured much like other function calls. Here the first argument is a list of assignments, hence the first double parenthesis (you can declare without assigning, in which case the double parenthesis disappears since it's a list of variables, or `(variable value)` pairs).
4. The rest of the `let` arguments can be seen as the body of the scope, you can put any number of statements there. Usually these are function calls, so (func args) and it is parenthesis time again.
I get that the parentheses can get confusing, especially at first. One adjusts quickly though; using proper indentation helps.
I mostly know Lisp through Guix, and... SKILL, which is a proprietary derivative from Cadence. They added a few things like inline math, SI suffixes (I like that one), and... a C "calling convention", which I just find weird: the compiler interprets foo(something) as (foo something). As I understand it, this just moves the opening parenthesis before the preceding word prior to evaluation, if there is no space before it.
I don't particularly like it, as that messes with my C instincts, especially when it comes to spotting the scope. I find the syntax more convoluted with it, so harder to parse (not everything is a function, so parenthesis placement becomes arbitrary).
> Why is the second set of parens necessary?
it distinguishes the bindings from the body.
strictly speaking there's a more direct translation using `setq` which is more analogous to variable assignment in C/Python than the `let` binding, but `let` is idiomatic in lisps and closures in C/Python aren't really distinguished from functions.
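Side by side, the two translations look roughly like this (a sketch):

    ;; LET introduces a new scope for the bindings:
    (let ((x 1)
          (y 2))
      (+ x y))      ; => 3

    ;; SETQ just assigns, closer to x = 1; y = 2 in C or Python
    ;; (assuming the variables already exist somewhere):
    (setq x 1
          y 2)
    (+ x y)         ; => 3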
You’re right!
I just wouldn’t normally write it that way.The code is written the same way it is logically structured. `let` takes 1+ arguments: a set of symbol bindings to values, and 0 or more additional statements which can use those symbols. In the example you are replying to, `bar-x` and `quux-y` are symbols whose values are set to the result of `(bar x)` and `(quux y)`. After the binding statement, additional statements can follow. If the bindings aren't kept together in a `[]` or `()` you can't tell them apart from the code within the `let`.
I prefer that to this (valid) C++ syntax:
You'd never actually write that, though. An empty lambda would be more concisely written as []{}, although even that is a rare case in real world code.
This reminds me of the terror that is the underbelly of JS.
https://jsfuck.com/
The tragedy of Lisp is that postfix-esque method notation just plain looks better, especially for people with the expectation of reading left-to-right.
"Looks better" is subjective, but it has its advantages both for actual autocomplete - as soon as I hit the dot key my IDE can tell me the useful operations for the object - and also for "mental autocomplete" - I know exactly where to look to find useful operations on the particular object because they're organized "underneath" it in the conceptual hierarchy. In Lisps (or other languages/codebases that aren't structured in an OOP-ish way) this is often a pain point for me, especially when I'm first trying to make my way into some code/library.
As a bit of a digression:
The ML languages, as with most things, get this (mostly) right, in that by convention types are encapsulated in modules that know how to operate on them - although I can't help but think there ought to be more than convention enforcing that, at the language level.
There is the problem that it's unclear - if you can Frobnicate a Foo and a Baz together to make a Bar, is that an operation on Foos, on Bazes, or on Bars? Or maybe you want a separate Frobnicator to do it? (Pure) OOP languages force you to make an arbitrary choice, Lisp and co. just kind of shrug, and the ML languages let you take your pick, for better or worse.
It's not really subjective because people have had the opportunity to program in the nested 'read from the inside out' style of lisp for 50 years and almost no one does it.
I think the cost of Lisp machines was the determining factor. Had it been ported to more operating systems earlier history could be different right now.
That was 40 years ago. If people wanted to program inside out with lots of nesting then unfold it in their head, they would have done it at some point a long time ago. It just isn't how people want to work.
People don't work in postfix notation either, even though it would be more direct to parse. What people feel is clearer is much more important.
It's not just Lisp, though. The prefix syntax was the original one when the concept of records/structs were first introduced in ALGOL-like languages - i.e. you'd have something like `name(manager(employee))` or `name OF manager OF employee`. Dot-syntax was introduced shortly after and very quickly won over.
De gustibus non disputandum est, I personally find the C++/Java/Rust/... style postfix notation (foo.bar()) to be appalling.
TXR Lisp has this notation, combined with Lisp parethesis placement.
Rather than obj.f(a, b), we have obj.(f a b).
The dot notation is more restricted than in mainstream languages, and has a strict correspondence to underlying Lisp syntax, with read-print consistency. You cannot have a number in there; that won't go to dot notation. Chains of dot method calls work, by the way. There must not be whitespace around the dot, though; you simply cannot split this across lines. The "null safe" dot is .?; forms using it check obj for nil and, if so, yield nil rather than trying to access the object or call a method.
And what about when `bar` takes several inputs? Postfix seems like an ugly hack that hyper-fixates on functions of a single argument to the detriment of everything else.
It's not like postfix replaces everything else. You can still do foo(bar, baz) where that makes the most sense.
However, experience shows that functions having one "special" argument that basically corresponds to grammatical subject in natural languages is such a common case that it makes sense for PLs to have syntactic sugar for it.
Look at the last line in the example, where I show a method being called on a tuple. Postfix syntax isn't limited to methods that take a single argument.
I think it really depends; in Common Lisp for example I don't think that's the case.
The only case where it's a bit different and took some time for me to adjust was that adding bindings adds an indent level. It's still mostly top-bottom, left to right. Clojure is quite a bit different, but it's not a property of lisps itself I'd say. I have a hard time coming up with examples usually so I'm open to examples of being wrong here.
Your example isn't a very functional code style though so I don't know that I'd consider it to be idiomatic. Generally code written in a functional style ends up indented many layers deep. Below is a quick (and quite tame) example from one of the introductory guides for Racket. My code often ends up much deeper. Consider what it would look like if one of the cond branches contained a nested cond.
https://docs.racket-lang.org/continue/index.html
Common Lisp, which is what I use, is not really a functionally oriented language. I'd say the above is okay in CL.
I must have missed that memo. Sure it's remarkably flexible and simultaneously accommodates other approaches, but most of the code I see in the wild leans fairly heavily into a functional style. I posted a CL link in an adjacent comment.
Here's an example that mixes in a decent amount of procedural code that I'd consider idiomatic. https://github.com/ghollisjr/cl-ana/blob/master/hdf-table/hd...
It's easy enough to add -> (and related arrow operators) to Common Lisp as macros.
https://github.com/hipeta/arrow-macros
The common complaint that Common Lisp lacks some feature is often addressed by noting how easy it is to add that feature.
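As a sketch of how little it takes, a bare-bones thread-first macro can be written in a few lines (the linked libraries do essentially this, plus edge cases):

    (defmacro -> (x &rest forms)
      ;; Thread X as the first argument through each form, left to right.
      (reduce (lambda (acc form)
                (if (listp form)
                    (list* (first form) acc (rest form))
                    (list form acc)))
              forms
              :initial-value x))

    (-> 5 (+ 3) (* 2) 1+)   ; expands to (1+ (* (+ 5 3) 2)) => 17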
Besides arrow-macros there's also cl-arrows, which is basically exactly the same thing, and Serapeum also has arrow macros (though the -> macro in Serapeum is for type definitions, the Clojure-style arrow macro is hence relegated to ~>).
Been programming in Lisp for a while. The parens disappear very quickly. One trick to accelerate it is to use a good editor with structural editing (e.g., paredit in Emacs or something similar). All your editing is done on balanced expressions. When you type “(“, the editor automatically inserts “)” with your cursor right in between. If you try to delete a “)”, the editor ignores you until you delete everything inside and the “(“. Basically, you start editing at the expression level, not so much at the character or even line level. You just notice the indentation/shape of the code, but you never spend time counting parentheses or trying to balance anything. Everything is balanced all the time and you just write code.
> reading from right to left, bottom to top - from inside out
I don't understand why you think this. Can you give an example?
Does this example help? https://github.com/ghollisjr/cl-ana/blob/master/binary-tree/...
Right.
The ergonomic problem people face is that the chaining of functions appears in other contexts, like basic OOP.
Some kids trained on banana.monkey().vine().jungle() go into a tizzy when they see (jungle (vine (monkey banana))).
Not sure about other lisps, but Clojure has piping. I was under the impression that composing functions is pretty standard in FP in general; for example, the above can be written with Clojure's piping (threading) macros, reading left to right.
Also, while `comp` in Clojure is right to left, it is easy to define one that goes left to right. And if anything, it even uses fewer parentheses than the OOP example, O(1) vs O(n).
(2 * pi * x) sin sqrt log
I have known Lisp since I read The Little Lisper around 1996, and was an XEmacs user until around 2005.
The parentheses really do disappear, just like the hieroglyphics in C-influenced languages; it is a matter of habit.
At least it was for me.
To me any kind of deep nesting is an issue. It goes against the idea of reducing the amount of mental context window needed to understand something.
Plus, syntax errors can easily take several minutes to fix, because if the syntax is wrong, auto-format doesn't work right, and then you have to read a wall of text to find out where the missing close paren should have been.
> Good article. Funnily enough the throw away line "I don't see parentheses anymore". Is my greatest deterrent with lisp. It's not the parens persay, it's the fact that I'm used to reading up to down and left to right.
From the comments in the post: Let's see how we can get from the LISP version to something akin to the C version. First, let's "modernize" the LISP version by replacing parentheses with "curly braces" and adding some commas and newlines just for fun.
This kinda looks like a JSON object. Let's make it into one and add some assumed labels while we're at it. Now, if we replace "defun" with the return type, replace some of the curlies with parentheses, get rid of the labels we added, use infix operator notation, and not worry about it being a valid JSON object, we get something very close. Reformat this a bit, add some C keywords and statement delimiters, and Bob's your uncle.
0 - https://www.goodreads.com/quotes/573737-language-shapes-the-...
Whorf was an idiot. It’s not worth quoting him.
> Whorf was an idiot. It’s not worth quoting him.
The citation is relevant to this topic, therefore use and attribution warranted.
It’s relevant, but it’s also wrong. It doesn’t help him make the case. But sure, make sure you attribute it to the correct idiot.
This is the most elementary hurdle a lisp programmer will face. You do indeed become adjusted to it quite quickly. I wouldn’t let this deter you from exploring something like Clojure more deeply.
per se
Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing. Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, whether it's disk, network, or display. And that's where the most complications come from: these devices you are communicating with have quirks that you need to deal with. Purely functional programming is very nice in theory, but how far can you actually get away with it?
The idea of pure functional programming is that you can really go quite far if you think of your program as a pure function f(input) -> outputs with a messy impure thing that calls f and does the necessary I/O before/after that.
Batch programs are easy to fit in this model generally. A compiler is pretty clearly a pure function f(program source code) -> list of instructions, with just a very thin layer to read/write the input/output to files.
Web servers can often fit this model well too: a web server is an f(request, database snapshot) -> (response, database update). Making that work well is going to be gnarly in the impure side of things, but it's going to be quite doable for a lot of basic CRUD servers--probably every web server I've ever written (which is a lot of tiny stuff, to be fair) could be done purely functional without much issue.
Display also can be made work: it's f(input event, state) -> (display frame, new state). Building the display frame here is something like an immediate mode GUI, where instead of mutating the state of widgets, you're building the entire widget tree from scratch each time.
In many cases, the limitations of purely functional isn't that somebody somewhere has to do I/O, but rather the impracticality of faking immutability if the state is too complicated.
I guess my point is that you actually have to write the impure code somehow, and it's hard: the external world has tendencies to fail, needs to be retried, coordinated with other things. You have to fake all these issues. In your web server example, if you need a cache layer for a certain part of the data, you really can't add one without encoding it into the state management tooling. And at this point you are writing a lot of non-functional code in order to glue it together with pure functions and maybe do some simple transformation in the middle. Is it worth it?
I have respect for OCaml, but that's mostly because it allows you to write mutable code fairly easily.
Roc codifies the world vs core split, but I'm skeptical how much of the world logic can be actually reused across multiple instances of FP applications.
There's a spectrum of FP languages with Haskell near the "pure" end where it truly becomes a pain to do things like io and Clojure at the more pragmatic end where not only is it accepted that you'll need to do non functional things but specific facilities are provided to help you do them well and in a way that can be readily brought into the functional parts of the language.
(I'm biased though as I am immersed in Clojure and have never coded in Haskell. But the creator of Clojure has gone out of his way to praise Haskell a bunch and openly admits where he looked at or borrowed ideas from it.)
> external world has tendencies to fail, needs to be retried, coordinated with other things.
This is exactly why I'm so aggressive in splitting IO from non-IO.
A pure function generally has no need to raise an exception, so if you see one, you know you need to fix your algorithm not handle the exception.
Whereas every IO action can succeed or fail, so those exceptions need to be handled, not fixed.
> You have to fake all these issues.
You've hit the nail on the head. Every programmer at some point writes code that depends on a clock, and tries to write a test for it. Those tests should not take seconds to run!
In some code bases, the tests really do wait out the full time.
In other code-bases, some refactoring is done, and a fake clock is invented. Why not go even further and just pass in a time, not a clock? Typically my colleagues are pretty accepting of doing the work to fake clocks, but don't generalise that solution to faking other things, or even skipping the fakes and operating directly on the inputs or outputs.

Does your algorithm need to upload a file to S3? No it doesn't: it needs to produce some bytes and a url where those bytes should go. That can be done in unit-test land without any IO or even a mocking framework. Then some trivial one-liner higher up the call-chain can call your algorithm and do the real S3 upload.
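A minimal sketch of both ideas (pass in a time, not a clock; produce bytes and a URL instead of uploading). All the names here are made up, and only the Prelude is used:

```
-- "Pass in a time, not a clock": the pure part takes the current time as data.
type Seconds = Integer

isExpired :: Seconds -> Seconds -> Bool
isExpired now deadline = now > deadline

-- "Produce some bytes and a url" rather than uploading to S3 directly.
planUpload :: String -> (String, String)   -- (destination url, payload)
planUpload report = ("s3://example-bucket/report.txt", report)

-- The real clock read and the real upload live in a thin outer layer.
main :: IO ()
main = do
  let (url, payload) = planUpload "some report"
  putStrLn ("would upload " ++ show (length payload) ++ " bytes to " ++ url)
  print (isExpired 100 90)
```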
I completely agree, but I still question the purpose of FP languages. Writing the S3 upload code is quite hard if you really want to handle all possible error scenarios. Even if you use some library for that, you still need to know which errors it can trigger, which need to be handled, and how to handle them. The mental work can be equal to that of the core function generating the file. In any language I'd separate these two pieces of code, but I'm not sure I'd want to handle S3 upload logic, with all the error handling, in an FP language. That said, I've not used Clojure yet and that seems like a very pragmatic language, which might actually be usable even for these parts of the code.
Think of it like other features:
* Encapsulation? What's the point of having it if it's perfectly sealed off from the world? Just dead-code eliminate it.
* Private? It's not really private if I can Get() to it. I want access to that variable, so why hide it from myself? Private adds nothing because I can just choose not to use that variable.
* Const? A constant variable is an oxymoron. All the programs I write change variables. If I want a variable to remain the same, I just won't update it.
Of course I don't believe in any of the framings above, but it's how arguments against FP typically sound.
Anyway, the above features are small potatoes compared to the big hammer that is functional purity: you (and the compiler) will know and agree upon whether the same input will yield the same output.
Where am I using it right now?
I'm doing some record linkage - matching old transactions with new transactions, where some details may have shifted. I say "shifted", but what really happened was that upstream decided to mutate its data in-place. If they'd had an FPer on the team, they would not have mutated shared state, and I wouldn't even need to do this work. But I digress.
Now I'm trying out Dijkstra's algorithm, to most efficiently match pairs of transactions. It's a search algorithm, which tries out different alternatives, so it can never mutate things in-place - mutating inside one alternative will silently break another alternative. I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist. But I wish I didn't have to be so vigilant. I really miss Haskell doing that part of my carefulness for me.
>I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist.
C# has introduced many functional concepts. Records, pattern matching, lambda functions, LINQ.
The only thing I am missing and will come later is discriminated unions.
Of course, F# is better suited for the job if you want a mostly functional workflow.
I don't want functional-flavoured programming, I want functional programming.
Back when I was more into pushing Haskell on my team (10+ years ago), I pitched the idea something like:

You get: the knowledge that your function's output will only depend on its input.

You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.

Those higher-order functions are a tough sell for programmers who only ever want to do things the way they've always done them.

But 5 years after that, in Java-land everyone was using maps, folds and filters like crazy (or in C# land, Selects and Wheres and SelectManys, etc.) with some half-thought-out bullshit reasoning like "it's functional, so it must be good!"
So we paid the price, but didn't get the reward.
Using map, fold etc. is not the hard part of functional programming. The hard part is managing effects (via monads, monad transformers, or effects). Trying to convert a procedural inner mutating algorithm to say Haskell is challenging.
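To give a flavour of what "managing effects" looks like in practice, here's a minimal sketch of a StateT-over-IO transformer stack; it assumes the mtl package, and the counter example is made up:

```
import Control.Monad.State (StateT, execStateT, get, put)
import Control.Monad.IO.Class (liftIO)

-- Mutable-looking state, threaded over IO by a monad transformer.
counter :: StateT Int IO ()
counter = do
  n <- get
  liftIO (putStrLn ("count is " ++ show n))
  put (n + 1)

main :: IO ()
main = do
  final <- execStateT (counter >> counter) 0
  print final   -- prints 2
```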
Never used monads with Clojure (the only Lisp I've done "serious" work in). Haskell introduced them to me, but I've never done anything large with Haskell (no jobs!). Scala, however, has monads via the cats or (more recently) the ZIO library and they work just fine there.
The main problem with Monads is you're almost always the only programmer on a team who even knows what a Monad is.
> The hard part is managing effects
You can say that again!
Right now I'm working in C#, so I wish C# managed effects, but it doesn't. It's all left to the programmer.
I don't know, stacking monads is a comparable level of pain to me.
One struggle I've had with wrapping my head around using FP and Lisp-like languages for a "real world" system is handling something like logging. Ideally that's handled outside of the function that might be doing a data transformation, but how do you build a log message that outputs information about old and new values without contaminating your "pure" transducer?
You could I guess have a “before” step that iterates your data stream and logs all the before values, and then an “after” step that iterates after and logs all the after and get something like:
```
(->> data
     (map log-before)
     (map transform-data)
     (map log-after-data))
```
But doesn’t that cause you to iterate your data 2x more times than you “need” to and also split your logging into 2x as many statements (and thus 2x as much IO)
So, do you mean like you have some big array, and you want to do something like this? (Below is not a real programming language.)
I haven't used Haskell in a long time, but here's a kind of pure way you might do it in that language, which I got after tinkering in the GHCi REPL for a bit. In Haskell, since you want to separate IO from pure logic as much as possible, functions that would do logging instead return a tuple of the log to print at the end and the pure value. But because that's annoying and would require rewriting a lot of code to manipulate tuples, there's a monad called the Writer monad which does it for you, and you extract the result at the end with the `runWriter` function, which gives you back the tuple after you're done doing the computation you want to log.

You shouldn't use Text or String as the log type, because using the Writer involves appending a lot of strings, which is really inefficient. You should use a Text Builder, because it's efficient to append Builders together, and because they become Text at the end, which is the string type you're supposed to use for Unicode text in Haskell.
So, this is it:
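A minimal version of that shape, assuming the mtl and text packages (the logNegate name is made up here; negation stands in for a real transformation):

```
import Control.Monad.Writer (Writer, runWriter, tell)
import qualified Data.Text.Lazy.Builder as B
import qualified Data.Text.Lazy.IO as TLIO

-- Pure: each item is logged via the Writer's Builder log; no IO happens here.
logNegate :: Int -> Writer B.Builder Int
logNegate x = do
  tell (B.fromString ("negating " ++ show x ++ "\n"))
  pure (negate x)

-- Impure shell: run the pure computation, print the log, use the new list.
theActualIOFunction :: [Int] -> IO ()
theActualIOFunction xs = do
  let (ys, logText) = runWriter (traverse logNegate xs)
  TLIO.putStr (B.toLazyText logText)
  print ys

main :: IO ()
main = theActualIOFunction [1, 2, 3]
```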
So "theActualIOFunction [1,2,3]" would print: And then it does something with the new list, which has been negated now.Does this imply that the logging doesn't happen until all the items have been processed though? If I'm processing a list of 10M items, I have to store up 10M*${num log statements} messages until the whole thing is done?
Alternatively, the Writer can be replaced with "IO", then the messages would be printed during the processing.
The computation code becomes effectful, but the effects are visible in types and are limited by them, and effects can be implemented both with pure and impure code (e.g. using another effect).
The effect can also be abstract, making the processing code kinda pure.
In a language with unrestricted side effects you can do the same by passing a Writer object to the function. In pure languages the difference is that the object can't be changed observably. So instead its operations return a new one. Conceptually IO is the same with the object being "world", so computation of type "IO Int" is "World -> (World, Int)". Obviously, the actual IO type is opaque to prevent non-linear use of the world (or you can make the world cloneable). In an impure language you can also perform side-effects, it is similar to having a global singleton effect. A pure language doesn't have that, and requires explicit passing.
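A toy picture of that idea; this is not GHC's actual definition, just the conceptual shape:

```
newtype World = World ()                 -- opaque stand-in for "the outside"
newtype IO' a = IO' (World -> (World, a))

-- Sequencing threads the world value through, which fixes the evaluation order.
bindIO' :: IO' a -> (a -> IO' b) -> IO' b
bindIO' (IO' f) k = IO' (\w ->
  let (w', x) = f w
      IO' g   = k x
  in g w')

returnIO' :: a -> IO' a
returnIO' x = IO' (\w -> (w, x))

runIO' :: IO' a -> a
runIO' (IO' f) = snd (f (World ()))

main :: IO ()
main = print (runIO' (returnIO' 5 `bindIO'` (\n -> returnIO' (n + 1))))
```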
Yes, it does imply that, except since Haskell is lazy, you'll be holding onto a thunk until the IO function is evaluated, so you won't have a list of 10 million messages in memory up until you're printing. And even then, lists are lazy too, so you won't ever have all entries of the list in memory at once either, because list entries are also thunks: once you're done printing one, you throw it away and evaluate a thunk to create the next cons cell in the list, and then you evaluate another thunk to get the item that the next cell points to and print it. Everything is implicitly interleaved.
In the case above, where I constructed a really long string, it depends on the type of string you use. I used lazy Text, which is internally a lazy list of strict chunks of text, so that won't ever have to be in memory all at once to print it, but if I had used the strict version of Text, then it would have just been a really long string that had to be evaluated and loaded into memory all at once before being printed.
Sorry, I lack a lot of context for Haskell and its terms (my experience with FP is limited largely to forays into Lisp / Clojure), but if I'm understanding right, you're saying because the collection is being lazily evaluated, the whole process up to the point of re-combining the items back into their final collection will be happening in a parallel manner, so as long as the IO is ordered to occur before that final collection, it will occur while other items are still being processed? So if the program were running and the system crashed half way through, we'd still have logs for everything that was processed up to the point it crashed (modulo anything that was inflight at the time of the crash)?
What happens if there are multiple steps with logging at each point? Say perhaps a program where we want to:
1) Read records from a file
2) Apply some transformations and log
3) Use the resulting transformations as keys to look up data from a database and log that interaction
4) Use the result from the database to transform the data further if the lookup returned a result, or drop the result otherwise (and log)
5) Write the result of the final transform to a different file
and do all of the above while reporting progress information to the user.
And to be very clear, I'm genuinely curious and looking to learn so if I'm asking too much from your personal time, or your own understanding, or the answer is "that's a task that FP just isn't well suited for" those answers are acceptable to me.
> And to be very clear, I'm genuinely curious and looking to learn so if I'm asking too much from your personal time, or your own understanding, or the answer is "that's a task that FP just isn't well suited for" those answers are acceptable to me.
No, that's okay, just be aware that I'm not an expert in Haskell and so I'm not going to be 100% sure about answering questions about Haskell's evaluation system.
IO in Haskell is also lazy, unless you use a library for it. So it delays the action of reading in a file as a string until you're actually using it, and in this case that would be when you do some lazy transformations that are also delayed until you use them, and that would be when you're writing them to a file. When you log the transformations, only then do you start actually doing the transformations on the text you read from the file, and only then do you open the file and read a chunk of text from it, like I said.
As for adding a progress bar for the user, there's a question on StackOverflow that asks exactly how to do this, since IO being lazy in Haskell is kind of unintuitive.
https://stackoverflow.com/questions/6668716/haskell-lazy-byt...
The answers include making your own versions of the standard library IO functions that have a progress bar, using a library that handles the progress bar part for you, and reading the file and writing the file in some predefined number of bytes so you can calculate the progress yourself.
But, like the other commenter said, you can also just do things in IO functions directly.
It's entirely up to you. You can just write Haskell with IO everywhere, and you'll basically be working in a typical modern language but with a better type system. Main is IO, after all.
> if the program were running and the system crashed half way through, we'd still have logs for everything that was processed up to the point it crashed
Design choice. This one is all IO and would export logs after every step:
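A sketch of what that might look like; the file names, the stand-in transformation, and the fake lookup are placeholders:

```
-- Everything in IO, logging after each step.
main :: IO ()
main = do
  records <- lines <$> readFile "input.txt"
  putStrLn ("read " ++ show (length records) ++ " records")
  let transformed = map reverse records                 -- stand-in transformation
  putStrLn "applied transformations"
  mapM_ (\r -> putStrLn ("looked up: " ++ r)) transformed
  writeFile "output.txt" (unlines transformed)
  putStrLn "wrote output"
```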
Remember, if you can do things, you can log things. So you're not going to encounter a situation where you were able to fire off an action, but could not log it 'because purity'.

Now repeat it for every function where you want to log.
Now repeat this for every location where you want to log something because you're debugging
For debugging purposes, there's Debug.Trace, which does IO and subverts the type system to do so.
But with Haskell, I tend to do less debugging anyway, and spend more time getting the types right to begin with; when there's a function that doesn't work but still type checks, I feed it different inputs in GHCi and reread the code until I figure out why, and this is easy because almost all functions are pure, with no side effects and no reliance on global state. This is probably a sign that I don't write enough tests from the start, so I end up doing it like this.
But, I agree that doing things in a pure functional manner like this can make Haskell feel clunkier to program, even as other things feel easier and more graceful. Logging is one of those things where you wonder if the juice is worth the squeeze when it comes to doing everything in a pure functional way. Like I said, I haven't used it in a long time, and it's partly because of stuff like this, and partly because there's usually a language with a better set of libraries for the task.
> Logging is one of those things where you wonder if the juice is worth the squeeze
Yeah, because it's often not just for debugging purposes. Often you want to trace the call and its transformations through the system and systems. Including externally provided parameters like correlation ids.
Carrying the entire world with you is bulky and heavy :)
> You get: the knowledge that your function's output will only depend on its input.
> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
You're my type of guy. And literally none of my coworkers in the last 10 years were your type of guy. When they read this, they don't look at it in awe, but in horror. For them, functions should be allowed to have side effects, and for loops are a basic thing they see no good reason to abandon.
Statistically, most of one's coworkers will never have looked at a functional language or used one to write actual code, so it is understandable they don't get it. What makes me sad is the apparent unwillingness to learn such a thing, and the sticking with "everything must be OOP" even in situations where it would be (with a little practice and knowledge of functional languages) simple to make it purely functional and make testing and parallelization trivial.
> Statistically, most of one's coworkers will never have looked at a functional language or used one to write actual code, so it is understandable they don't get it.
I'm not against functional languages. My point was that if you want to encourage others to try it, those two are not what you want to lead with.
But that's the irony of it, they did abandon the for-loops!
Maps and folds and filters are everywhere now. Why? Because 'functional is good!' ... but why is functional good?
> I don't want functional-flavoured programming, I want functional programming.
> you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
You mean what C# literally does everywhere because Enumerable is the premier weapon of choice in the language, and has a huge amount of exactly what you want: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...
(well, with the only exception of foreach, which for some odd reason is still a loop).
> But 5 years after that
Since .net 3.5 18 years ago: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...
> So we paid the price, but didn't get the reward.
Who is "we", what was the price, and what was the imagined reward?
> Who is "we", what was the price, and what was the imagined reward?
Slow down and re-read.
>> You get: the knowledge that your function's output will only depend on its input.
>> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
Still makes no sense. Once again: who paid, what was the price, what was the expected reward?
Who paid: programmers switching to FP.
What was the price: two things:
- The programmers must stop using for-loops and [i]ndexes.
- The programmers must start using maps/folds/filters/et cetera.
What was the expected reward: the knowledge that their functions' outputs will only depend on their inputs.
In short: programmers who change their behaviour get the benefit of certainty about specific properties of their programs.
Those starred rhetorical questions initially looked to me like a critique of Lisp! Because that's how Lisp (particularly Common Lisp) works. All those things are softish. You can see unexported symbols even if you're not supposed to use them. There is no actual privacy unless you do something special like unintern then recreate a symbol.
> you (and the compiler) will know and agree upon whether the same input will yield the same output
What exactly does this mean? Haskell has plenty of non-deterministic functions — everything involving IO, for instance. I know that IO is non-deterministic, but how is that expressed within the language?
Functions which use IO are tagged as such in the type system. IO can call non-IO, but not vice-versa.
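A tiny illustration, using nothing beyond the Prelude:

```
-- Pure: the type promises the same input always gives the same output.
double :: Int -> Int
double x = x * 2

-- Impure: IO in the type marks it as effectful / non-deterministic.
askAndDouble :: IO Int
askAndDouble = do
  line <- getLine              -- IO code can call pure code like `double`...
  pure (double (read line))

-- ...but pure code cannot call askAndDouble: `double askAndDouble`
-- would not type-check, which is exactly the point.
main :: IO ()
main = askAndDouble >>= print
```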
Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.
Functional programming simply says: separate the IO from the computation.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transforms inputs -> outputs. And that can be entirely functional.
> Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transforms inputs -> outputs. And that can be entirely functional.
My work needs pseudorandom numbers throughout the big middle, for example, drawing samples from probability distributions and running randomized algorithms. That's pretty messy in a FP setting, particularly when the PRNGs get generated within deeply nested libraries.
At what point does this get messy?
When deeply nested libraries generate PRNGs, all that layering becomes impure and must be treated like any other stateful or IO code. In Haskell, that typically means living with a monad transformer or effect system managing the whole stack, and relatively little pure code remains.
The messiness gets worse when libraries use different conventions to manage their PRNG statefulness. This is a non-issue in most languages but a mess in a 100% pure setting.
What I don't understand about your comment is: Where do these "deeply nested libraries" come from? I use one library or even std library and pass the RNG along in function arguments or as a parameter. Why would there be "deeply nested" libraries? Is it like that in Haskell or something? Perhaps we are using different definitions of "library"?
It's not that bad if you're using a splittable RNG, is it? Any function that (transitively) depends on an RNG needs an extra input, but that's it.
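A minimal sketch with the random package's splittable StdGen (the sample function is just an example):

```
import System.Random (StdGen, mkStdGen, split, randomR)

-- Pure: anything needing randomness takes a generator as an extra input.
-- `split` hands independent generators to independent sub-computations.
sample :: StdGen -> (Double, Double)
sample g =
  let (g1, g2) = split g
      (x, _)   = randomR (0, 1) g1
      (y, _)   = randomR (0, 1) g2
  in (x, y)

main :: IO ()
main = print (sample (mkStdGen 42))   -- same seed, same output, every time
```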
>Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.
>So a program is a function that transforms the input to the output.
>separate the IO from the computation.
What about managing state? I think that is an important part, and it's easy to mess up.
Each step calculates the next state and returns it. You can then compose those state calculators. If you need to save the state that’s IO and you have a bit specifically for it.
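A minimal sketch of that composition; the step functions and the file name are placeholders:

```
type AppState = Int

-- Each step calculates the next state and returns it.
step1, step2 :: AppState -> AppState
step1 = (+ 1)
step2 = (* 2)

-- Composing the state calculators is just function composition.
run :: AppState -> AppState
run = step2 . step1

-- Saving the state is IO, kept in its own small bit.
save :: AppState -> IO ()
save = writeFile "state.txt" . show

main :: IO ()
main = save (run 0)
```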
It takes a bit of discipline, but generally all state additions should be scoped to the current context. Meaning, when you enter a subcontext, the state becomes input and is treated as holy, and when you leave to the parent context, only the result matters.

But if that particular context has become impure, it is flagged as such in the documentation, so that extra care is taken when interacting with it.
> separate the IO from the computation.
Can you please elaborate on this point? I read it as this web page (https://wiki.c2.com/?SeparateIoFromCalculation) describes, but I fail to see why it is a functional programming concept.
> but I fail to see why it is a functional programming concept.
"Functional programming" means that you primarily use functions (not C functions, but mathematical pure functions) to solve your problems.
This means you won't do IO in your computation because you can't do that. It also means you won't modify data, because you can't do that either. Also you might have access to first class functions, and can pass them around as values.
If you do procedural programming in C++ but your functions don't do IO or modify (not local) values, then congrats, you're doing functional programming.
Thanks. I now see why it makes sense to me. I work in DE, so in most of our cases we do streaming (IO) without any transformation (computation), and then we do the transformation in a totally different pipeline. We never transform anything we consume; we always keep the original copy, even if it's bad.
> I fail to see why it is a functional programming concept.
Excellent! You will encounter 0 friction in using an FP then.
To the extent that programmers find friction using Haskell, it's usually because their computations unintentionally update the state of the world, and the compiler tells them off for it.
Think about this: if a function calls another function that produces a side effect, both functions become impure (non-functional). Simply separating them isn't enough. That's the difference when thinking of it in functional terms
Normally what functional programmers will do is pull their state and side effects up as high as they can so that most of their program is functional
Having functions which do nothing but computation is core functional programming. I/O should be delegated to the edges of your program, where it is necessary.
> The interesting part is what you actually do in the middle to transforms inputs -> outputs.
Can you actually name something? The only thing I can come up with is working with interesting algorithms or datastructures, but that kind of fundamental work is very rare in my experience. Even if you do, it's quite often a very small part of the entire project.
A whole web app. The IO is generally user-facing network connections (request and response), IPC and RPC (databases, other services), and file interaction. Anything else is logic. An FP program is a collection of pipes, and IO are the endpoints. With FP the blob of data passes cleanly from one section to another, while in imperative code some of it sticks. In OOP, there are a lot of blobs that fling stuff at each other and in the process create more blobs.
A general "web app"'s germane parts are:
- The part that receives the connection
- The part that sends back a response
- Interacting with other unspecified systems through IPC, RPC or whatever (databases mainly)
The shit in between, calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers? I'm not being obtuse - intentionally anyway - I'm actually curious what interesting things functional programmers do because I'm not seeing much of it.
Edit: my point is, you say "Anything else is logic." to which I respond "What's left?"
> calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers?
A LOT, depending on the domain. There are many R&D and HPC labs throughout the US in which programmers work directly with specialists in the hard sciences. A significant percentage of their work is akin to "calculating a derivative".
There's lots left!
"When a customer in our East Coast location makes this purchase then we apply this rate, blah blah blah".
"When someone with >X karma visits HN they get downvote buttons on comments, blah blah blah".
Yes! In most projects, those requirements are stretched across technicalities like IO. But you can pull them back to the core of your project. It takes effort, but the end result is a pleasure to work with. It can be done with FP, OOP, LP,…
> Even if you do, it's quite often a very small part of the entire project.
So your projects are only moving bits from one place to another? I've literally never seen that in 20 years of programming professionally. Even network systems that are seen as "dumb pipes" need to parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.
Surely the program calculates something, otherwise why would you need to run the program at all if the output is just a copy of the input?
Yes and I notice you still did not provide an interesting example. Surely parsing packets is not an interesting example of functional programming's powers?
What interesting things do you do as a programmer, really?
> parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.
That's a few more than zero. I don't do network programming, that was just an example to show how even the quintessential IO-heavy application requires non-trivial calculations internally.
Fair enough. It's just that in my experience the "cool bits" are quickly done and then we get bogged down in endless layers of inter-systems communication (HTTP, RPC, file systems, caches). I often see FP people saying stuff like "it's not 100% pure, of course there are some isolated side-effects" and I'm thinking.. my brother, I live inside side-effects. The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
But of course this heavily depends on the domain you are working in. Some people work in simulation or physics or whatever and that's where the interesting bits begin. (Even then I'm thinking "programming" is not the interesting bit, it's the physics)
> my brother, I live inside side-effects. The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
I've never seen what you work on so there is no way I can say this with certainty, but generally people unfamiliar with functional programming have way more code that is, or could be, pure in their code base than they realize. Or, to put it the other way: if you were to go line by line through your code (skipping comments and whitespace) and give every line a yes/no on whether it performs IO, what percentage are actually performing IO? Not lines that are related to IO, or are preparing for or handling the results of IO, but how many lines are the actual line that writes to the file or sends the network packet?

Generally, it's a much smaller percentage than people think, because they usually associate actual IO with things "related to", "preparing for", or "handling results from" IO.
And then after finding that percentage to be lower than expected, it can also be made to be significantly lower by following a few functional programming design approaches.
> The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.
A big part of it, I'm sure, but it requires some work. Pushing the side effects to the edge requires some abstractions to not directly mess with the original mutable state.
You are, in fact, designing a state diagram from something that was evolving continuously along a single dimension: time. The transitions of the state diagram are the code, and the nodes are the inputs and outputs of that code. Then it becomes clear that IO only matters when storing and loading those nodes. Because those nodes are finite and well defined, the non-FP code for dealing with them becomes simpler to write.
It's a matter of framing. Think of any of the following:
- Refreshing daily "points" in some mobile app (handling the clock running backward, network connectivity lapses, ...)
- Deciding whether to send a marketing e-mail (have you been unsubscribed, how recently did you send one, have you sent the same one, should you fail open or closed, is this person receptive to marketing, ...)
- How do you represent a person's name and transform it into the things your system needs (different name fields, capitalization rules, max characters, what if you try to put it on an envelope and it doesn't fit, ...)
- Authorization logic (it's not enough to "just use a framework" no matter your programming style; you'll still have important business logic about who can access what when and how the whole thing works together)
And so on. Everything you're doing is mapping inputs to outputs, and it's important that you at least get it kind of close to correct. Some people think functional programming helps with that.
When I see this list all I can think of is how all these things are just generic, abstract rules and have nothing to do with programming. This, of course, is my problem. I have a strange mental model of things.
I can't shake off the feeling we should be defining some clean sort of "business algebra" that can be used to describe these kind of notions in a proper closed form and can then be used to derive or generate the actual code in whatever paradigm you need. What we call code feels like a distraction.
I am wrong and strange. But thanks for the list, it's helpful and I see FP's points.
You're maybe strange (probably not, when restricted to people interested in code), but wrongness hasn't been proven yet.
I'd push back, slightly, in that you need to encode those abstract rules _somehow_, and in any modern parlance that "somehow" would be a programming language, even if it looks very different from what we're used to.
From the FP side of things, they'd tend to agree with you. The point is that these really are generic, abstract rules, and we should _just_ encode the rules and not the other state mutations and whatnot that also gets bundled in.
That implicitly assumes a certain rule representation though -- one which takes in data and outputs data. It's perfectly possible, in theory, to describe constraints instead. Looking at the example of daily scheduling in the presence of the clock running backward; you can define that in terms of inputs and outputs, or you can say that the desired result satisfies (a) never less than the wall clock, (b) never decreases, (c) is the minimal such solution. Whether that's right or not is another story (it probably isn't, by itself -- lots of mobile games have bugs like that allowing you to skip ads or payment forever), but it's an interesting avenue for exploration given that those rules can be understood completely orthogonally and are the business rules we _actually_ care about, whereas the FP, OOP, and imperative versions must be holistically analyzed to ensure they satisfy business rules which are never actually written down in code.
I agree.
Especially when reading Rust or C++.
That's code I would prefer to have generated for me as needed in many cases, I'm generally not that interested in manually filling in all the details.
Whatever it is, it hasn't been created yet.
You can name almost anything (these are general-purpose languages, after all), but I'll just throw a couple of things out there:
1. A compiler. The actual algorithms and datastructures might not be all that interesting (or they might be if you're really interested in that sort of thing), but the kinds of transformations you're doing from stage to stage are sophisticated.
2. An analytics pipeline. If you're working in the Spark/Scala world, you're writing high-level functional code that represents the transformation of data from input to output, and the framework is compiling it into a distributed program that loads your data across a cluster of nodes, executes the necessary transformations, and assembles the results. In this case there is a ton of stateful I/O involved, all interleaved with your code, but the framework abstracts it away from you.
Thanks, especially two is very interesting. Admittedly the framework itself is the actually interesting part and that's what I meant with this work being "rare" (I mean how many people work on those kinds of frameworks fulltime? It's not zero, but..)
I think what I engaged with is the notion that most programming "has some side-effects" ("it's not 100% pure"), but much of what I see is like 95% side-effects with some cool, interesting bits stuffed in between the endless layers of communication (without which the "interesting" stuff won't be worth shit).
I feel FP is very, very cool if you got yourself isolated in one of those interesting layers but I feel that's a rare place to be.
Yeah, it's not that e.g. Haskell won't allow side effects, it's that side effects are constrained: 1) all the side-effectful operations have types that forbid you from using them outside of a side-effect context; 2) and it's a good thing they do, because Haskell's laziness means the results you would get otherwise are counterintuitive.
Other FP frameworks are far less strict about such things, and many FP features are now firmly in the mainstream. So no, I don't think this stuff is particularly rare, though Haskell/OCaml systems probably still are. There are pluses and minuses with structuring code in a pure-core-with-side-effect-shell way – FP enthusiasts tend to think the pluses outweigh the minuses.
Best, I think, to view FP not as dogma or as a class of FP-only languages, but rather as a paradigm first, a set of languages second.
It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but in a way you have that in pretty much all programming languages (including C!).
But most functions in Common Lisp do mutate things, there is an extensive OO system and the most hideous macros like LOOP.
I certainly never felt constrained writing Common Lisp.
That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).
> It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but in a way you have that in pretty much all programming languages (including C!).
It's less about what the language "allows" you to do and more about how the ecosystem and libraries "encourage" you to do.
Any useful program has side-effects. IMHO the point is to isolate the part of the code that has the side-effects as much as possible, and keep the rest purely functionsl. That makes it easier to debug, test, and create good abstractions. Long term it is a very good approach.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Erlang is strictly (?) a functional language, and the reason why it was invented was to do network-y stuff in the telco space. So I'm not sure why I/O and functional programming would be opposed to each other like you imply.
> Erlang is strictly (?) a functional language,
First and foremost Erlang is a pragmatic programming language :)
This is discussing Common Lisp which is not even a mostly-functional language, and far from purely functional.
He says Lisp, rather than Common Lisp. Sure, given the context he's writing in now, maybe he means Common Lisp, but Joe Marshall was a Lisp programmer before Common Lisp existed, so he may not mean Common Lisp specifically.
Somehow Haskell and friends shifted the discussion around functional programming to pure vs non-pure! I am pretty sure it started with functions as first-class objects being the differentiator in Schemes, Lisps, and ML-family languages. Thus "functional", but that's just a guess.
> Somehow haskell and friends shifted the discussion around functional programming to pure vs non-pure
In direct response to every other language in the mid-2010s saying, "Look, we're functional too, we can pass functions to other functions, see?"
C's had that forever: function pointers.

In a way this is true.
A function pointer is already half way there. What it lacks is lexical environment capture.
And things that are possible to do with closures never stop amazing me.
Anyways, functional programming is not about purity. It is something that came from academia, with two major language families: ML-likes and Lisp-likes, each focusing on certain key features.
And purity is not even the key feature of MLs in general.
Closures bring me joy.
They are one of those language features that, having learned them, it's a little hard to flip my brain around into the world I knew before I learned them.
If I think hard, I can sort of remember how I used to do things before I worked almost exclusively in languages that natively support closures ("Let's see... I create a state object, and it copies or retains reference to all the relevant variables... and for convenience I put my function pointer in there too usually... But I still need rules for disposing the state when I'm done with it..." It's so much nicer when the language handles all of that bookkeeping for you and auto-generates those state constructs).
Curiously, higher-order functions, and the concept of something like a closure, dates back to the earliest days of PL design - and I'm not even talking about Lisp! Algol-60, the granddaddy of pretty much every modern mainstream programming language, already had the notion of nested functions (which closed over variables from the surrounding scopes) and the ability to pass those functions to other functions.
They weren't fully first-class because there were no function-typed variables, nor could you return a function. Even so, this already lets you do stuff like map and filter. And Algol-60 programs from that era did use those capabilities, e.g.:
No, functions aren't first class in C. When you use a function in an expression it undergoes function-to-pointer conversion and "decays" to a pointer to the function. You can only call, store, etc. function pointers, not functions. Function pointers are first class. Functions are not, as you can't create them at runtime.
A functional programming language is one with first class functions.
What is the impact on the user of having first class functions vs first class function pointers?
Last I checked when you implement lambda in lisp it's also a pointer to the lambda internally.
Function pointers can’t close over variables.
The other important bit here is garbage collection.
Local and anonymous functions that capture lexical environments really, really work much better in languages built around GCs.
Without garbage collection a trivial closure (as in javascript or lisps) suddenly needs to make a lot of decisions around referencing data that can be either on the stack or in the heap.
C++ does this and the decision to make is to capture by reference or value.
Environments aren’t a thing in Haskell etc.; does that mean it’s not functional?
Yes, C++ is a great example of having to make decisions that don't have good solutions without a GC or something like it. See the mentions of undefined behaviour in the relevant sections of the standard, i.e. when a lambda captures something with a limited lifetime.
Are you saying that Haskell doesn't have lexical environments? It very much does, just as all major languages of the ML language family do.
That has nothing to do with the value/pointer distinction.
And “close over” semantics differ greatly depending on the language.
I wrote a recursive descent parser in Lisp for a YAML replacement language[1]. It wasn't difficult. Lisp makes it easy to write I/O, but also easy to separate logic from I/O. This made it easy for me to write unit tests without mocking.
I also wrote a toy resource scheduler at an HTTP endpoint in Haskell[2]. Writing I/O in Haskell was a learning curve but was ultimately fine. Keeping logic separate from I/O was the easy thing to do.
1: https://github.com/djha-skin/nrdl
2: https://github.com/djha-skin/lighthouse
It's about minimizing and isolating state and side effects, not eliminating them completely
Functional core, imperative shell is a common pattern. Keep the side effects on the outside. Instead of doing side effects directly, just return a data structure that can be used to enact the side effect
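A minimal sketch of "return a data structure that describes the side effect"; the Effect type and all names here are invented:

```
-- The pure core decides *what* should happen, but performs nothing.
data Effect
  = WriteFile FilePath String
  | Log String

core :: String -> [Effect]
core input =
  [ Log ("processing " ++ input)
  , WriteFile "out.txt" (reverse input)   -- stand-in transformation
  ]

-- The imperative shell is the only place that actually does IO.
runEffect :: Effect -> IO ()
runEffect (WriteFile path contents) = writeFile path contents
runEffect (Log msg)                 = putStrLn msg

main :: IO ()
main = mapM_ runEffect (core "hello")
```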
As others have said, a pure program is a useless program. The only place stuff like that has in this world is as a proof assistant.
What I will add is look up how the GHC runtime works, and the STGM. You may find it extremely interesting. I didn't "get" functional programming until I found out about how exotic efficient execution of functional programs ends up being.
"Purely Functional Programming", I guess mostly Haskell/Purescript.
So this only really means:
Purely Functional Programming by default.
In most programming languages you can write
"hello " + readLine()
And this would intermix pure function (string concatenation) and impure effect (asking the user to write some text). And this would work perfectly.
By doing so, the order of evaluation becomes essential.
With pure functional programming (by default), you must explicitly separate the part of your program doing I/O from the part of your program doing only pure computation. And this is enforced using a type system focusing on I/O. Hence the difference between Haskell's default `IO` and OCaml, which does not need it, for example.
In Haskell you are forced by the type system to write something like:
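For example (a minimal sketch, Prelude only):

```
main :: IO ()
main = do
  line <- getLine                     -- the effect, visible in the IO type
  putStrLn ("hello " ++ line)         -- the pure concatenation
```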
You cannot mix the `getLine` directly in the middle of the concatenation operation.

But while this is a very different style of programming, I/O is just more explicit, and it "costs" more, because code with I/O is not as elegant and easy to manipulate as pure code. Thus it naturally induces a way of coding that makes you really conscious of the parts of your program that need IO and the parts that you could do with only pure functions.
In practice, ... yep, you end up working in a monad that is specific to your application domain, which looks a lot like the IO monad and will most often contain IO.
Another option is to use a free monad for your entire program, which lets you write in your own domain language and control its evaluation (either using IO or another system that simulates IO but is not really IO, typically for testing purposes).
This is a good write up. Thank you for it. My experience with Haskell only comes from university and the focus there was primarily on the pure side of the code. I'll have a look at how Haskell deals with the pure/impure split for some real-world tasks. The Lisp way of doing it just seems weird to me, too ad hoc, not really structured.
OpenSCAD is such a good school of functional programming. There is no "time" or flow of execution. Or variables, scopes and environments. You are not constructing a program, but a static model which has desired properties in space and time.
The point of functional, "sans I/O" style is to separate the definition of I/O from the rest of your logic. You're still doing I/O, but what sorts of I/O you're doing has a clear self-contained definition within your program. https://sans-io.readthedocs.io/how-to-sans-io.html
There is no reason you can't use side effects in pure functional programming. You just need to provide the appropriate description of the side effect to avoid caching and force a particular evaluation order. If you have linear types, you do it by passing around opaque tokens. I'm not entirely sure how IO works in Haskell, but I think the implementation is similar. Even C compilers use a system like that internally.
The boundary between the program and the rest of the system allows I/O of course. What FP does is "virtualize" I/O by representing it as data (thus it can be passed around). Then at some point these changes get "committed" to the outside. Representing I/O separately from how it is carried out allows a lot of things to be done, such as cancelling (ctrl+z) operations.
Everyone writes real programs that have side effects. Functional programming is no different. But the side effects happen in specific ways or places, rather than all over the place.
Most of the code in most programs is not the part that is doing the I/O. It's doing stuff on a set of values to transform them. It gets values from somewhere, does stuff using those values, and then outputs some values. The complicated part is not the transfer of the final byte sequence to whatever I/O interface they go to, the core behavior of the program is the stuff that happens before that.
There are ways to handle side effects with pure functions only (it’s kind of cheating, because the actual side effects are performed by the non-pure runtime/framework that’s abstracted away, while the pure user code just defines when to perform them and how to respond to them). It’s possible, but it gets very awkward very fast. I wouldn’t use FP for any part of the code that deals with IO.
That was the question, what would you use FP for?
I had the same question until I understood one key pattern of pure functional programming. Not sure it has a name but here goes.
There is world, and there is a model of the world - your program. The point of the program, and all functions, is to interact with the model. This part, data structures and all, is pure.
The world interacts with the model through an IO layer, as in haskell.
Purity is just an enforcement of this separation.
I think it's the "imperative shell, functional core" pattern. The shell provides the world, the core acts on it, then the shell commits it at various intervals.
Functional React follows this pattern. The issue is when the programmer thinks the world is some kind of stable state that you can store results in. It's not; the whole point is that it is created anew and the whole computation flow restarts. The escape hatches are the hooks, and each has a specific usage and pattern to follow to survive world recreation. This is why you should be careful with them, as they are effectively the world for subcomponents. So when you add to the world with hooks, interactions with the addition should stay at the same level.
> Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing
Where have you ever heard anyone talk about side-effect free programs, outside of academic exercises? The linked post certainly isn't about 100% side-effect/state free code.
Usually, people talk about minimizing side-effects as much as possible, but since we build programs to do something, sometimes connected to the real world, it's basically impossible to build a program that is both useful and 100% side-effect free, as you wouldn't be able to print anything to the screen, or communicate with other programs.
And minimizing side-effects (and minimizing state overall) has a real impact on how easy it is to reason about the program. Being really careful about where you mutate things leads to most of the code being very explicit about what it's doing, and code only affects data that is close to where the code itself is, compared to intertwined state mutation, where things everywhere in the codebase can affect state anywhere.
The pragmatic approach is to see that FP's key point is statelessness and use that in your code (written in more mainstream languages) when appropriate.
I never believed any FP evangelist ever since I realized I can't even write quicksort with it *.
(* Yes, you can technically write it procedurally like a good C programmer, sure.)
Had a PalmPilot taped to a modem that did our auth. Lisp made the glue code feel like play. No types barking, no ceremony—just `(lambda (x) (tinker x))`. We didn’t debug, we conversed. Swapped thoughts with the REPL like it was an old friend.
Though these are minor complaints, there are a couple of things I'd like to change about a Lisp language.

One is the implicit function calls. For example, you'll usually see calls like this: `(+ 1 2)`, which translates to 1 + 2, but I would find it clearer if it was `(+(1,2))`, where you have a certain explicitness to it.

It doesn't stop me from using Lisp languages (Racket is fun, and I've been investigating Clojure) but it took way too long for the implicit function stuff to grok in my brain.

My other complaint is how the character `'` can have overloaded meaning, though I'm not entirely sure if this is implementation dependent or not.
It's not really implicit though, the first element of a list that is evaluated is always a function. So (FUN 1 2) is an explicit function call. The problem is that it doesn't look like C-like languages, not that it's not explicit.
In theory ' just means QUOTE, it should not be overloaded (although I've mostly done Common Lisp, so no idea if in other impl that changes). Can you show an example of overloaded meaning?
Still unsure, whether a naked single quote character for (quote …) is really a good idea.
Got used to it by typing out (quote …)-forms explicitly for a year. The shorthand is useful at the REPL but really painful in backquote-templates until you type it out.
CL:QUOTE is a special operator and (CL:QUOTE …) is a very important special form. Especially for returning symbols and other code from macros. (Read: Especially for producing code from templates with macros.)
Aside: Lisp macros solve the C-fopen() fclose() dance for good. It even closes the file handle on error, see WITH-OPEN-FILE. That alone is worth it. And the language designers provided the entire toolset for building stuff like this, for free.
No matter how unusual it seems, it really is worth getting used to.
There's an example I saw where `'` was used as a way to denote a symbol, but I can't find that explicit example. It wasn't SBCL; I believe it may have been Clojure. It's possible I'm misremembering.
That said, since I work in C-like languages during the day, I suppose my minor complaint has to do with ease of transition, it always takes me a minute to get acquainted to Lisp syntax and read Lisp code any time I work with it.
It's really a minor complaint and one I probably wouldn't have if I worked with a Lisp language all day.
That's correct, but it's not that denoting a symbol is a different function of '; it's the same function: turn code into data. That's all symbols are (in CL at least).

For example, in a quoted list you don't need to quote the symbols, because they are already in a quoted expression!
'(hello hola)
' really just says "do not evaluate what's next, treat it as data"
To be a bit more precise, every time you have a name in Common Lisp, that is already a symbol. But it will get evaluated: if it's the first item of an evaluated list it will be looked up in the function namespace, and if it's anywhere else it will be looked up in the variable namespace. What quote is doing is just asking the compiler not to evaluate it and to treat it as data.
Symbols are quoted representations of identifiers so it still does the same thing.
The tricky bit is that it doesn’t quite denote a symbol.
Think of a variable name like `x` usually referring to a value. Example in C89:
int x = 5;

The variable is called `x` and happens to have the value of integer 5. In case you know the term, this is an "rvalue" as opposed to an "lvalue".

In C-land (and in the compiler) the name of this variable is the string "x". In C-land this is often called the identifier of this variable.
In Python you also have variables and identifiers. Example Python REPL (bash also has this):

>>> x = 5
>>> x
5
In Common Lisp they are called symbols instead of identifiers. Think of Python 3's object.__dict__["x"].

Lisp symbols (a.k.a. identifiers, a.k.a. variable names) are more powerful and more important than in C89 or Python, because there are source code templates. The most important use-case for source code templates is Lisp macros (as opposed to C89 #define-style macros). This is also where backquote and quasiquote enter the picture.
In Lisp you can create a variable name (a.k.a. an identifier a.k.a. a symbol) with the function INTERN (bear with me.)
(intern "X") is a bit like adding "x" to object.__dict__ in Python.

Now for QUOTE:
Lisp exposes many parts of the compiler and interpreter (lisp-name "evaluator") that are hidden in other languages like C89 and Python.
Like with the Python example ">>>" above:
We get the string "x" from the command line. It is parsed (leaving out a step for simplicity) and the interpreter is told to look up variable "x", gets the value 5 and prints that.
(QUOTE …) means that the interpreter is supposed to give you back the piece of code you gave it instead of interpreting it. So (QUOTE x) or 'x — note the dangling single quote — returns or prints the variable name of the variable named "x".
Better example: (+ 1 2) evaluates to the number 3, while (quote (+ 1 2)) and '(+ 1 2) both evaluate to the internal source code of "add 1 and 2", one step short of actually adding them.

In source code templates you sometimes provide code that has to be evaluated multiple times, like the iconic increment "i++" has to be evaluated multiple times in many C89 loops. This is where QUOTE is actually useful. (I've ignored a boatload of detail for the sake of understandability.)
First person to ask for more parentheses in Lisp.
First time I saw (+ 1 2), I thought it was a typo. Spent an hour trying to “fix” it into (1 + 2). My professor let me. Then he pointed at the REPL and said, “That’s not math—it’s music.” Never forgot that. The '? That’s the silent note.
It's due to Polish notation[0] as far as I understand it. This is how that notation for mathematics works.
I suppose my suggestion would break those semantics.
[0]: https://en.wikipedia.org/wiki/Polish_notation
Aye, Polish notation sure. But what he gave me wasn’t a lecture, it was a spell.
Syntax mattered less than rhythm. Parens weren’t fences, they were measures. The REPL didn’t care if I understood. It played anyway.
Beautiful. I wish I had more professors who expressed concepts poetically back when I was in school. That's the kind of line that sticks in your head.
R works exactly as you describe. You can type `+`(1, 2) and get 3, because in R everything that happens is a function call, even if a few binary functions get special sugar so you can type 1 + 2 for them as well. The user can of course make their own of these if they wrap them in percent signs. For example: `%plus%` = function(a, b) { `+`(a, b) }. A few computer algebra system languages provide even more expressivity, like Yacas and FriCAS. The latter even has a type system.
Similar in Nim as well.
In Standard ML, you can do either 1+2 or op+(1, 2).
R is much crazier than that because even things like assignment, curly braces, and even function definitions themselves are all function calls. It's literally the only primitive in the language, to which everything else ultimately desugars.
It's not implicit in this case, it's explicit. + is the function you're calling. And there's power in having mathematical operations be functions that you can manipulate and compose like all other functions, instead of some special case of infix implicit (to me, yeah) function calling, like 1 + 2, where it's no longer similar to other functions.
How is it implicit? The open parenthesis is before the function name rather than after, but the function isn’t called without both parentheses.
If you want to use commas, you can in Lisp dialects I’m familiar with—they’re optional because they’re treated as whitespace, but nothing is stopping you if you find them more readable!
, is typically “unquote.” Clojure is the only “mainstream” Lisp that allows , as whitespace. Has meaning in CL and Scheme.
Ancient LISP (caps deliberate) in fact had optional commas that were treated as whitespace. You can see this in the Lisp 1 Programmer's Manual (dated 1960).
This practice quickly disappeared though. (I don't have an exact time line about this.)
My mistake!
That’s what I get for not double checking… well… basically anything I think during my first cup of coffee.
func(a, b) is basically the same as (func a b). You're just moving the parens around. '+' is extra 'strange' because in most languages it isn't used like other functions: imagine if you had to write +(1, 2) in every C-like.
Surely you mean (+ (, 1 2))
;)
[flagged]
Wat’s the sound of meeting something older than you thought possible.
Lisp listened. Modem sang. PalmPilot sweated. We talked, not debugged.
> No types barking
No thanks
The reason I switched from Scheme to Common Lisp was because I could say...
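Probably something along these lines (a guess at the sort of declaration meant; the name foo and the 0 to 100 range come from the next sentence):
(declaim (type (integer 0 100) foo))
(defvar foo 42)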
And then do a (describe 'foo) in the REPL to get Lisp to tell me that it wants an integer from 0 to 100. Common Lisp supports gradual typing and will (from my experience) do a much better job of analyzing code and pointing out errors than your typical scripting language.
The most impressive thing, to me, about LISP is how the very, very small distance between the abstract syntax tree and the textual representation of the program allows for some very powerful extensions to the language with relatively little change.
Take default values for function arguments. In most languages, that's a careful consideration of the nuances of the parser, how the various symbols nest and prioritize, whether a given symbol might have been co-opted for another purpose... In LISP, it's "You know how you can have a list of symbols that are the arguments for the function? Some of those symbols can be lists now, and if they are, the first element is the symbolic argument name and the second element is a default value."
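A minimal sketch of what that looks like (greet is just a made-up example):
(defun greet (name &optional (greeting "hello"))
  (format nil "~a, ~a!" greeting name))

(greet "world")         ; => "hello, world!"
(greet "world" "howdy") ; => "howdy, world!"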
Always read from experienced developers praising lisps, but why is it so rare in production applications?
Looking for a nice, solid, well-documented library to do something is difficult for most stuff. There are some real gems out there, but usually you end up having to roll your own thing. And Lisp generally encourages rolling your own thing.
People smart enough to read and write it are rare.
Is it about intelligence or just not being used to/having time for learning a different paradigm?
I personally have used LISP a lot. It was a little rough at first, but I got it. Despite having used a lot of languages, it felt like learning programming again.
I don't think there's something special about me that allowed me to grok it. And if that were the case, that's a horrible quality in a language. They're not supposed to be difficult to use.
I think it allows very clever stuff, which I don't think is done routinely, but that's what gets talked about. I try to write clean functional style code in other languages which in my book means separation of things that have side effects and things that don't. I don't think I'll have difficulty writing standard stuff in Lisp with that approach.
Just because it allows intricate wizardry doesn't mean it is inherently hard to get/use. I think the bigger issue would be ecosystem and shortage of talent pool.
I have hardly any issues reading and writing CL and I am so stupid even bags of bricks take pity on me. Intelligence is not a factor here.
The only software I use that I know runs a lisp is Hacker News.
When I was in high school I learned AutoCAD and I remember that back then it was scripted in LISP. I'm not sure if that is still true.
JavaScript and Python have adopted almost every feature that differentiated Lisp from other languages. So in comparison Lisp is just more academic, esoteric, and advanced.
This is only true if you define "Lisp" as the common subset. If you look specifically at Common Lisp, neither Python nor JS come close in terms of raw power.
It's pretty rare because when it was being originally designed the main intended use case was for bragging about the cool unique niche programming language you use on your blog posts. While it's a very good language for being able to recursively get itself onto the front page, it's not so good at conformist normie use cases.
In my case, my boss won't let me.
Terry Pratchett's quote in one of his books (in fact I think this is a running gag, and appeared in multiple books):
That's what I think about five closing parentheses too... But tbh I am also jealous, because I can't program in lisp at all.
I agree with some statements OP makes but not others. Ultimately, I write in lisp because it's fun to write in Lisp due to its expressive power, ease of refactoring, and the Lisp Discord[1].
> Lisp is easier to remember,
I don't feel this way. I'm always consulting the HyperSpec or googling the function names. In this respect it's the same to me as any other dynamically typed language, such as Python.
> has fewer limitations and hoops you have to jump through,
Lisp as a language has incredibly powerful features found nowhere else, but there are plenty of hoops. The CLOS truly feels like a superpower. That said, there is a huge dearth of libraries. So in that sense, there's usually lots of hoops to jump through to write an app. It's just I like jumping through them because I like writing code as a hobby. So fewer limitations, more hoops (supporting libraries I feel the need to write).
> has lower “friction” between my thoughts and my program,
Unfortunately I often think in Python or Bash because those are my day job languages, so there's often friction between how I think and what I need to write. Also AI is allegedly bad at lisp due to reduced training corpus. Copilot works, sorta.
> is easily customizable,
Yup, that's its defining feature. Easy to add to the language with macros. This can be very bad, but also very good, depending on its use. It can be very worth it both to implementer and user to add to the language as part of a library if documented well and done right, or it can make code hard to read or use. It must be used with care.
> and, frankly, more fun.
This is the true reason I actually use Lisp. I don't know why. I think it's because it's really fun to write it. There are no limitations. It's super expressive. The article goes into the substitution principle, and this makes it easy to refactor. It just feels good having a REPL that makes it easy to try new ideas and a syntax that makes refactoring a piece of cake. The Lisp Discord[1] has some of the best programmers on the planet in it, all easy to talk to, with many channels spanning a wide range of programming interests. It just feels good to do lisp.
1: https://discord.gg/HsxkkvQ
As much as I sympathize with this post and similar ones, and as much I personally like functional thinking, LISP environments are not nearly as advanced anymore as they used to be.
Which Common LISP or Scheme environment (that runs on, say Ubuntu Linux on a typical machine from today) gets even close to the past's LISP machines, for example? And which could compete with IntelliJ IDEA or PyCharm or Microsoft Code?
https://ssw.jku.at/General/Staff/PF/genera-screenshots.html
Common Lisp can compete with Python no problem, that's what matters to me. You get:
- truly interactive development (never wait for something to restart, resume bugs from any stack frame after you fixed them),
- self-contained binaries (easy deployment, my web app with all the dependencies, HTML and CSS is ±35MB)
- useful compile-time warnings and errors, a keystroke away (a small sketch follows this list), for Haskell levels see Coalton (so better than Python),
- fast programs compiled to machine code,
- no GIL
- connect to, inspect or update running programs (Slime/Swank),
- good debugging tools (interactive debugger, trace, stepper, watcher (on some impls)…)
- stable language and libraries (although the implementations improve),
- CLOS and MOP,
- etc
- good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE), Jetbrains (SLT), Jupyter kernel, Lem, and more: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
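As a small sketch of those compile-time warnings (assuming SBCL; add2 and oops are made-up names), compiling this signals a type warning before anything runs:
(declaim (ftype (function (fixnum fixnum) fixnum) add2))
(defun add2 (a b) (+ a b))
(defun oops () (add2 "1" 2)) ; SBCL warns at compile time: "1" is not a FIXNUM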
What we might not get:
- advanced refactoring tools - also because we need them less, thanks to the REPL and language features (macros, multiple return values…).
---
For a lisp machine of yesterday running on Ubuntu or the browser: https://interlisp.org/
> self-contained binaries
But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
> good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE)
I can't speak for those other editors, but my experience with Alive has been pretty bad. I can't imagine anyone recommending it has used it. It doesn't do what slime does, and because of that, you're forced to use Emacs.
Calva for Clojure, however, is very good. I don't know why it can't be this way for CL.
Maybe you tried some time ago? This experience report concludes by "all in all, a great rig": https://blog.djhaskin.com/blog/experience-report-using-vs-co...
> The usage experience was very ergonomic, much more ergonomic than I'm used to with my personal CL set-up. Still, the inability to inspect stack frame variables would bother me, personally.
I don't use them, but I'd recommend Pulsar's SLIMA over the VSCode plugin, because it's older and based on Slime, where ALIVE is based on LSP.
> But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
Right. SBCL has core compression, so as I said a web app with dozens of dependencies and all static assets is ±35MB, and that includes the compiler and debugger (which allow you to connect to and update a running image, whereas this wouldn't be possible with LispWorks' stripped-down binary). 35MB for a non-trivial app is good IMO (and in the ballpark of a growing Go app, right?)
There's also ECL, if you rely on libecl you can get very small binaries (I didn't explore this yet, see example in https://github.com/fosskers/vend)
I will check out your courses to see if I can learn something from you.
> Maybe you tried some time ago? This experience report concludes by "all in all, a great rig"
No, I've read that article before. Not being able to inspect variables in the stack frame kills a lot of the point of a REPL, or even a debugger, so I wouldn't use Alive (and most people don't). But the article represents this as a footnote for a reason.
Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
In Clojure, if people ask what platforms it can reach, people will often say "anywhere." They will tell you to use Flutter through Clojure Dart (unfinished) or Babashka (lots of caveats to using this if all you want is small binaries in your app). Instead of talking about these things as tools with drawbacks, they will lump them all together in a deluge to give the impression the ecosystem is thriving. You did a similar thing in listing every editor under the sun. I doubt you have tried all of these extensively, but I could be wrong.
Same with ECL. Maybe you want the advantages of LISP with smaller binaries. But ECL is slower, supports fewer libraries, and cannot save/dump an image. You're giving up things that you don't normally have to give up in other ecosystems.
But this evangelism works against LISP. People come in from languages with very good tooling and are confused to find that half the things they were told would work do not.
> I will check out your courses to see if I can learn something from you.
Thanks. Don't hesitate to reach out and give feedback. If you're into web, you might find my newer (opinionated) resource helpful: https://web-apps-in-lisp.github.io/
> Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
I see. On social media comments, that's kinda true: I'm so used to hearing "there is no CL editor besides Emacs" (which is literally plain false and has been for years, even if you exclude VSCode), and other timeless FUD. Articles or pages on community resources (the Cookbook) should be more measured.
> listing every editor under the sun.
there's an Eclipse plugin (simple), a Geany one (simple), a Sublime one (using Slynk, can be decent?), Allegro (proprietary, tried the web version without color highlighting, surprising), and Portacle and plain-common-lisp are easy-to-install Emacs + CL + Quicklisp bundles…
some would add CLOG as a CL editor.
BTW the IntelliJ plugin is also based on Slime. Not much development activity though. But a revolution for the Lisp world if you think about it. Enough to make me want to mention it twice or thrice on HN.
> tried them extensively
emphasis on "extensively", so no. SLIMA for Atom/Pulsar was decent.
> ECL… slower…
True, and on this point I've been measured; but looking at how vend is doing would be very interesting, as it ships a very small binary based on libecl.
Of course you're right. I've written non-trivial programs in Scheme, Emacs is a good tool for it, but certainly don't know of an environment that matches the Lisp machines.
IDEs provide such environments for the most common languages but major IDEs offer meager functionality for Lisp/Scheme (and other "obscure" languages). With a concerted effort it's possible an IDE could be configured to do more for Lisp. Thing is the amount of effort required is quite large. Since AFAIK no one has taken up the challenge, we can only conclude it's not worth the time and energy to go there.
The workflow I've used for Scheme programming is pretty simple. I can keep as many Emacs windows ("frames") open as necessary with different views of one or several modules/libraries, a browser for documentation, terminals with REPL/compiler, etc. Sort of a deconstructed IDE. Likely it does take a bit more cognitive effort to work this way, but it gets the job done.
This is the first article I’ve ever read that made me want to go learn Lisp.
Watch Rich Hickey's early Clojure videos and be blown away.
Got any specific suggestion?
"Simple Made Easy" is pretty popular, there is a transcription with slides:
https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
I like "Clojure, Made Simple" even more.
https://www.youtube.com/watch?v=028LZLUB24s
Someone helpfully pulled out this chunk, which is a good illustration of why data is better than functions, a key driver of Clojure's design.
https://www.youtube.com/watch?v=aSEQfqNYNAc
It's tangentially relevant, but I've enjoyed this one, about hammock driven programming.
https://www.youtube.com/watch?v=f84n5oFoZBc
LispWorks has a free edition with lots of examples. Look into PAIP by Peter Norvig.
Even putting the Common Lisp aside, PAIP is my favourite book about programming in general, by FAR. Norvig's programming style is so clear and expressive, the book touches on more "pedestrian" parts of programming: building tools / performance / debugging, but also walks you through a serious set of algorithms that are actually practical and that I use regularly (and they shape your thinking): search, pattern matching, to some extent unification, building interpreters and compilers, manipulating code as data.
It's also extremely fun, you go from building Eliza to a full pattern matcher to a planning agent to a prolog compiler.
Paul Graham's On Lisp is also a powerful argument to try the language, even if some of the stuff it presents is totally bonkers. :-D
https://www.paulgraham.com/onlisp.html
what are the bonkers parts? (just curious)
The use of anaphoric expressions is not something I think I could recommend. But it's fun to read about.
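For the curious, the classic anaphoric macro from On Lisp is aif, which deliberately captures the variable it:
(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; usage sketch, with hypothetical helpers:
;; (aif (gethash key table) (process it) (handle-missing))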
Next time you see a HN post on a lisp-centric topic, click into the comments. I'll bet you a nickel that they'll be happier than most. Instead of phrases like "dumpster fire" they're using words like "joyful".
That's why I keep rekindling my learn-lisp effort. It feels like I'm just scratching the surface re: the fun that can be had.
Never been happier since building an Erp system in pure lisp and postgresql.
Could a not-too-trivial example, like the difference between a Java sudoku solver and a Lisp version with all the bells and whistles of FP such as functions as data and return values, recursion and macros, be used to illustrate the benefits?
Here's one in Clojure using its core.logic library. I'd say it's pretty neat. You can do something similar in something like Prolog, but a Java implementation would look very different.
https://github.com/sideshowcoder/core-logic-sudoku-solver/bl...
> Other general purpose languages are more popular and ultimately can do everything that Lisp can (if Church and Turing are correct).
I find these types of comments extremely odd and I very much support lisp and lisp-likes (I'm a particular fan of clojure). I can only see adding the parenthetical qualifier as a strange bias of throwing some kind of doubt into other languages which is unwarranted considering lisp at its base is usually implemented in those "other general purpose languages".
If you can implement lisp in a particular language then that particular language can de facto do (at least!) everything lisp can do.
One doesn't have to invoke Turing or Church to show all languages can do the same things.
Any code that runs on a computer (using the von Neumann architecture) boils down to just a few basic operations: Read/write data, arithmetic (add/subtract/etc.), logic (and/or/not/etc.), bit-shifting, branches and jumps. The rest is basically syntactic sugar or macros.
If your preferred programming language is a pre-compiled type-safe object oriented monster with polymorphic message passing via multi-process co-routines, or high-level interpreted purely functional archetype of computing perfection with just two reserved keywords, or even just COBOL, it's all going to break down eventually to the ops above.
Sometimes, when people say one language can't do what another does, they aren't talking about outputs. Nobody is arguing that lisp programs can do arithmetic and others can't, they're arguing that there are ergonomics to lisp you can't approach in other languages.
But even so
> it's all going to break down eventually to the ops above.
That's not true either. Different runtimes will break down into a completely different version of the above. C is going to boil down to a different set of instructions than Ruby. That would make Ruby incapable of doing some tasks, even with a JIT. And writing performance sensitive parts in C only proves the point.
"Any language can do anything" is something we tell juniors who have decision paralysis on what to learn. That's good for them, but it's not true. I'm not going to tell a client we're going to target a microcontroller with PHP, even if someone has technically done it.
You can trivially devise a language that doesn't, though? Let's say I have a language that can return 0 and only 0. It cannot reproduce lisp.
Isn’t this just a cheeky joke? I.e. “if Einstein is right about this whole theory of relativity thing”
Common Lisp at its base is usually written in Common Lisp.
I'm sure you are aware there is ultimately a chicken and egg problem here. Even given the case you presented, it doesn't invalidate the point that if it can implement lisp it must be able to do everything lisp can do. In fact given lisp's simplicity, I'd be hard pressed to call a language that couldn't implement lisp "general purpose".
"You're a very clever man, Mr. James, and that's a very good question," replied the little old lady, "but I have an answer to it. And it's this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him."
"But what does this second turtle stand on?" persisted James patiently.
To this, the little old lady crowed triumphantly,
"It's no use, Mr. James—it's turtles all the way down."
> I'm sure you are aware there is ultimately a chicken and egg problem here.
You should learn more about compilers. There is a really cool idea waiting for you.
There are several Lisp implementations (including fully-fledged operating systems) which are implemented in Lisp top to bottom.
This is conflating slightly different things, though? One is that you can build a program that does the same thing. The other is that you can do the same things with the language.
There are special forms in LISP, but that is a far cry from the amount of magic that can only be done in the compiler or at runtime for many languages out there.
Brainfuck is also Turing complete but that isn’t an argument that it’s a good replacement for LISP or any other language.
That has a name: Turing tarpit.
Yes, but sometimes doing the things Lisp can do in another language as easily and flexibly as they are done in Lisp has, as a first step, implementing Lisp in the target language.
For a famous example, see Clasp: https://github.com/clasp-developers/clasp
I believe ‘twas a joke.
I've never programmed in a Lisp, but I'd love to learn, it feels like one of those languages like Perl that are just good to know. I do have a job where getting better with SKILL would be useful.
"Why I Program in Lisp"
because you don't have money to waste on doctors?
Surely one of the main reasons to program in Lisp (and Haskell, and ???) is so you can write blog posts about doing so :-)
(I do really like Lisp).
> Lisp's dreaded Cambridge Polish notation is uniform and universal. I don't have to remember whether a form takes curly braces or square brackets or what the operator precedency is or some weird punctuated syntax that was invented for no good reason. It is (operator operands ...) for everything. Nothing to remember. I basically stopped noticing the parenthesis 40 years ago. I can indent how I please.
Well, that might be true for Scheme, but not for CL. There are endless forms for loops. I will never remember all of them. Or even a fraction of them. Going through Guy Steele’s CL book, I tend to think that I have a hard time remembering most of the forms, functions, and their signatures.
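For illustration, a few of the standard iteration forms that all do roughly the same thing:
(dotimes (i 3) (print i))
(dolist (x '(0 1 2)) (print x))
(loop for i from 0 below 3 do (print i))
(do ((i 0 (1+ i))) ((= i 3)) (print i))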
[dead]
[dead]
[flagged]
Programming is about coordination between tasks. Prove me wrong.
It doesn't have to be wrong to be irrelevant.
Many things can be viewed as coordination problems. All of life can be viewed as being about coordination between tasks.
But I want to engage in good faith and assume you have some way of making this productive. What angle are you going for?
Hm so your point is life is programming ?
Isn't that obvious? :)
I do wonder what happens if one of the tasks to be coordinated is "programming"?
that's metaprogramming :D
No. It's deep philosophy on software design.
> It's less of a big deal these days, but properly working lambda expressions were only available in Lisp until recently.
I think Haskell and ML had lambda expressions since like 1990.
Recent, compared to Lisp
The number of programmers in the workforce who started before Standard ML (1983) is tiny, and this argument would be relevant only to them.
The author of the referenced post is one of them, though.
The word “properly” is not only working hard here, but perhaps pointing to deeper concepts.
In particular, it implies a coherent design around scope and extent. And, much more indirectly, it points to time. EVAL-WHEN has finally made a bit of a stir outside Lisp.
Does this imply that lambda expressions in Haskell and ML don't have a "coherent design around scope and extent"? This is quite a claim, to be honest....
No, it doesn’t imply that. There are plenty of more popular languages that don’t though.
"properly working lambda expressions were only available in Lisp until recently."
until -> since
> "properly working lambda expressions were only available in Lisp until recently."
> until -> since
I think "only since recently" is not standard English, but, even if it were, I think it would change the intended meaning to say that they were not available in Lisp until recently, the opposite of what was intended. I find it clearer to move the "only": "were available only in Lisp until recently."
Personally, I'd probably move "until recently" to the front: "Until recently, properly working lambda expressions were only available in Lisp."
Sorry, I did indeed misunderstand the sentence due to its phrasing. I cannot edit my comment for some reason.
I like your fix the most.
It's perfectly good and idiomatic English, but it's an ambiguous formation and your suggested edit does clarify it.
I agree that "properly working lambda expressions were only available in Lisp until recently" is perfectly idiomatic, but easily misunderstood, English. I believe that the suggested fix "properly working lambda expressions were only available in Lisp since recently," which is what I was responding to, is not idiomatic. Claims about what is and isn't idiomatic aren't really subject to definitive proof either way, but it doesn't matter, because the suggester now agrees that it is not what was meant (https://news.ycombinator.com/item?id=43653723).
To be clear, the construction I’m endorsing is: "were available only in Lisp until recently", which is the construction that my editors typically proposed for similarly ambiguous deployments of "only". The ambiguity in the original placement is that it could be interpreted as only available as opposed to available and also something else. My editors always wanted it to be clear exactly what the "only" constrains.
"properly working lambda expressions were available only in lisp until recently."
I’m not really familiar with Lisp, but from glancing at this article it seems like all of these are really good arguments for programming in Ruby (my language of choice). Easily predictable syntax, simple substitution between variables and method calls, dynamic typing that provides ad hoc polymorphism… these are all prominent features of Ruby that are much clunkier in Python, JavaScript, or really any other commonly used language that I can think of.
Lisp is on my list of languages to learn someday, but I’ve already tried to pick up Haskell, and while I did enjoy it and have nothing but respect for the language, I ultimately abandoned it because it was just too time-consuming for me to use on a day-to-day basis. Although I definitely got something out of learning to program in a purely functional language, and in fact feel like learning Haskell made me a much better Ruby programmer.
I have about 6 years of ruby experience and if you're saying that ruby has "easily predictable syntax"...
You really should try lisp. I liked clojure a lot coming from ruby because it has a lot of nice ergonomics other lisps lack. I think you'd get a lot out of it.
Ruby and LISP have a lot of overlap. For my money, LISP is a little more predictable because the polymorphic nature of the language itself is always in your face; you know that you're always staring at a list, and you have no idea without context whether that list is being evaluated at runtime, is literal, or is the body of a macro.
Ruby has all those features but (to my personal taste) makes it less obvious that things are that wild.
(But in both languages I get to play the game "Where the hell is this function or variable defined?" way more often than I want to. There are some advantages to languages that have a strict rule about modular encapsulation and requiring almost everything into the current context... With Rails, in particular, I find it hard to understand other people's code because I never know if a given symbol was defined in another file in the codebase, defined in a library, or magicked into being by doing string transformations on a data source... In C++, I have to rely on grep a lot to find definitions, but in Ruby on Rails not even grep is likely to find me the answer I want! Common LISP is similarly flexible with a lot of common library functions that magick new symbols into existence, but the codebases I work on in LISP aren't as large as the Ruby on Rails codebases I touch, so it bites me less).
Go back to the source, use Smalltalk in a nice environment like VisualWorks and get all that built in :-)
Common Lisp has compilers that produce fast code.
On this topic: My absolute favorite Common Lisp special operator is `the`. Usage: `(the value-type form)`, as in `(the integer (do-a-bunch-of-math))`.
At first glance, it looks like your strong-typing tool. And it can be. You can build a static analyzer that will match, as best it can, the type of the form to the value-type and throw errors if they don't match. It can also be a runtime check; the runtime is allowed to treat `the` as an assert and throw an error if there's a mismatch.
But what the spec actually says is that it's a special operator that does nothing but return the evaluation of the form if the form's value is of the right type and the behavior is undefined otherwise. So relative to, say, C++, it's a clean entrypoint for undefined behavior; it signals to a Lisp compiler or interpreter "Hey, the programmer allows you to do shenanigans here to make the code go faster. Throw away the runtime type identifier, re-represent the data in a faster bit-pattern, slam this value into functions without runtime checks, assume particular optimizations will succeed, paint it red to make it go fasta... Just, go nuts."
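A sketch of how that is typically combined with optimization settings (assuming SBCL-like behavior; sum-below is a made-up example, and with safety 0 the fixnum claims are taken on faith):
(defun sum-below (n)
  (declare (optimize (speed 3) (safety 0))
           (type fixnum n))
  (let ((acc 0))
    (declare (type fixnum acc))
    (dotimes (i n acc)
      ;; (the fixnum ...) lets the compiler use raw fixnum arithmetic,
      ;; skipping overflow and type checks entirely
      (setf acc (the fixnum (+ acc i))))))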
If you want something similar to Ruby but more functional, try Elixir. The similarities are superficial but might be enough to ease you in.
Haskell is weird. You can express well-defined problems with relative ease and clarity, but performance can be kind of wonky and there's a lot more ceremony than your typical Lisp or Scheme or close relative of those. F# can give you a more lispish experience with a threshold about as low as Haskell's, but it comes with close ties to an atrocious corporation, and similar to Clojure it's not exactly a first-class citizen in its environment.
Building stuff in Lisp-likes typically doesn't entail two programming languages, the primary one and a second one to program the type system; in that way they're convenient in a way similar to Ruby. I like the parens and how they explicitly delimit portions that quite closely relate to the AST step in compilation, or whatever the interpreter sees; it helps with moving things around and molding the code quickly.
Given that code is mostly written by LLMs now (or will be soon), isn't it better to just use the best language that fits these requirements:
- LLM well trained on it.
- Easy for human team to review.
- Meets performance requirements.
Prob not lisp?
LLM training content is driven by popularity. If we use that as a metric there will never be any improvements or changes again.
how is that anywhere close to a given?