@nosrednayduj @screwlisp @cdegroot
First, thanks for raising that example. It's interesting and contains info I hadn't heard.
In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.
You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.
As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)
Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).
@nosrednayduj @screwlisp @cdegroot
And, unrelated, another reference I made in the show as to Clyde Prestowitz and book The Betrayal of American Prosperity.
https://www.goodreads.com/book/show/8104391-the-betrayal-of-american-prosperity
Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point may also interest this audience on the issue of capitalism, if not on the specific economic issue we were talking about tonight:
https://netsettlement.blogspot.com/2012/01/losing-war-in-quiet-room.html
-
@loke@functional.cafe ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).
Yes, the performance hit probably comes from copying the stack or restoring the stack. For small stacks this is trivial, but continuations are often most useful when computing recursive functions over very large data structures, and those computations tend to have very large stacks.
Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.
That might be why Kotlin has those restrictions on continuations.
@kentpitman@climatejustice.social @screwlisp@gamerplus.org @cdegroot@mstdn.ca
@ramin_hal9001 @kentpitman @screwlisp @cdegroot I didn't research it too much, but I think the reason is that when you have a function marked as suspend, it will always pass along an implicit extra argument which is the continuation. I also believe there is a dispatch block at the beginning of a function that can suspend that looks at the continuation to jump to the right part of the code. This is because code running on the JVM cannot directly manipulate the stack.
I don't know how it's implemented when you compile Kotlin to other targets. The semantics are the same, but the underlying implementation may be different.
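To make the mechanism concrete, here is a rough Python sketch of the rewrite described above (mine, not Kotlin's actual compiled output): an explicit continuation object carries a resume label plus saved locals, and a dispatch block at the top of the function jumps to the right spot.

```python
# Hypothetical sketch of a function with two suspension points rewritten as
# a state machine: the continuation records where to resume and what locals
# to restore, which is roughly what the compiler's implicit extra argument does.

class Continuation:
    def __init__(self):
        self.label = 0      # which suspension point to resume after
        self.acc = None     # saved local state

def step(cont, resumed_value=None):
    """One resumption of the 'suspendable' function.
    Returns ('suspend', request) or ('done', result)."""
    if cont.label == 0:          # dispatch block: initial entry
        cont.label = 1
        return ('suspend', 'need-first-value')
    elif cont.label == 1:        # resume after first suspension
        cont.acc = resumed_value
        cont.label = 2
        return ('suspend', 'need-second-value')
    elif cont.label == 2:        # resume after second suspension
        return ('done', cont.acc + resumed_value)

# Driver: the caller plays the role of the coroutine scheduler.
c = Continuation()
state, _ = step(c)               # runs to the first suspension
state, _ = step(c, 10)           # resumes with 10, runs to the second
state, result = step(c, 32)      # resumes with 32, completes
print(state, result)             # -> done 42
```

No stack manipulation is needed, which is why this style works on the JVM.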
-
@screwlisp is having some site connectivity problems so asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:
https://anonradio.net:8443/anonradio
He'll also be monitoring LambdaMOO at "telnet lambda.moo.mud.org 8888" for those who do that kind of thing. There are also Emacs clients you should get if you're REALLY using telnet.
Topic for today, I'm told, may include the climate, the war, the oil price hikes, some rambles I've recently posted on CLIM, and the book by @cdegroot called The Genius of Lisp, which we'll also revisit again next week.
-
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, but one of them is "we don't know how to compile this well". There is a preference for a very tight formal semantics, and for everything being well understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already basically led on garbage collection.
The basic issue is this: should a language incorporate things that maybe are not really well understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than they acknowledge. They also don't fairly understand the degree to which Lisp2 protects against the most common scenarios that would happen all the time in Scheme if there were a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later, because the feature was too important to leave out, where Scheme held back design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem, and while it's possible, it's still difficult to do efficiently. I don't quite remember if the original language feature had fully worked through all the tail-call situations in the way that it ultimately did. But it was brave to say that full continuations could be made adequately efficient.
And the Lisp community in general, and here I will include Scheme in that, though on other days I think these communities are sufficiently different that I would not, has collectively been much more brave and leading than many languages, which only grudgingly allow functionality that they know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was, I think, what finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all the memory to do a GC did not introduce thrashing when data was "paged out". And in modern hardware, memory is cheap, so size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.
For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
@kentpitman @screwlisp @cdegroot @ramin_hal9001
Generational GC changes the way you program and it's not *just* that it's efficient.
We used MIT-Scheme (which, by the early 90s was showing its age). We did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map...) but with set-car! to modify in-place -- because it made a HUGE difference not recreating all of those cons cells => bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
We were writing Scheme as if it were C because that was how you got speed in that particular world.
1/3
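For readers who don't know the Scheme operators, roughly this trade-off, in Python (illustrative names, not a real library):

```python
# (map f l) allocates a fresh result list; (map! f l) overwrites the cells
# of the input in place. The second avoids fresh allocation, at the cost of
# destroying the original data -- the style described above.

def map_fresh(f, xs):
    return [f(x) for x in xs]        # allocates a new list, like (map ...)

def map_bang(f, xs):
    for i in range(len(xs)):         # overwrites in place, like (map! ...)
        xs[i] = f(xs[i])
    return xs

data = [1, 2, 3]
out = map_bang(lambda x: x * 2, data)
print(out is data, out)              # -> True [2, 4, 6]  (no new list allocated)
```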
-
@kentpitman @screwlisp @cdegroot @ramin_hal9001
And then Bruce Duba joined the group (had just come from Indiana).
"Guys, you're doing this ALL WRONG",
"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",
"No, you need a better Scheme; you should try Chez".
...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...
... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.
2/3
-
@kentpitman @screwlisp @cdegroot @ramin_hal9001
See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new-space; the premise of generational GC being that this mostly shouldn't happen. The super-often new-generation-only pass is now doing a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, ... which then loses most of the benefit of generational GC.
(fluid-let and dynamic-wind also became way LESS cheap, mainly due to missing multiple optimization opportunities)
In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.
3/3
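A toy Python model of the write-barrier cost being described (a sketch of the general technique, not any particular collector's implementation):

```python
# Why set! hurts a generational collector: mutating an old-generation cell
# to point at a young object forces that cell into the minor collection's
# root set, via what GC literature calls a "remembered set".

class Cell:
    def __init__(self, value, generation):
        self.value = value
        self.generation = generation   # 'old' or 'young'

remembered_set = set()   # old cells that now point into young space

def set_bang(cell, new_value):
    cell.value = new_value
    if cell.generation == 'old' and getattr(new_value, 'generation', None) == 'young':
        # Write barrier: the minor GC must now also scan this old cell.
        remembered_set.add(cell)

old = Cell(None, 'old')
young = Cell(42, 'young')
set_bang(old, young)
print(len(remembered_set))   # -> 1: one more old-space cell every minor GC must traverse
```

Side-effect-free code never grows the remembered set, which is the point of the post above.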
-
@mdhughes
> you should try chez
@wrog @kentpitman @cdegroot @ramin_hal9001
-
@kentpitman @screwlisp @cdegroot @ramin_hal9001
There were other weirdnesses as well.
Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks; moreover, dropping references as fast as you can matters.
With copying GC, leaks are useless shit that has to be copied -- yes it eventually ends up in an old generation but until then it's getting copied -- and copying is where generational GC is doing work, and it's stuff unnecessarily surviving to the medium term that hurts you the most (generational GC *relies* on stuff becoming garbage as quickly as possible)
And so, tracking down leaks and finding places to put in weak pointers started mattering more...
4/3
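The weak-pointer fix, in Python terms, using the standard-library weakref module (the immediate reclamation shown relies on CPython's reference counting; an explicit collection call is added for safety):

```python
# A weak reference lets a cache remember an object without keeping it alive,
# so a copying collector never has to drag it through another generation.
import weakref

class Node:
    pass

n = Node()
cache = weakref.ref(n)       # does not count as a real reference
assert cache() is n          # still reachable while n is alive
del n                        # drop the strong reference as fast as we can
import gc; gc.collect()
print(cache())               # -> None: the collector reclaimed it
```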
-
@wrog
Did you see the garbage collection handbook's note on performance depending on having about five times as much memory as was technically needed? @dougmerritt
@kentpitman @cdegroot @ramin_hal9001
-
@screwlisp @kentpitman @cdegroot @ramin_hal9001 @dougmerritt
5? maybe for mark&sweep
but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...
... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.
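Back-of-envelope arithmetic for that claim (my own numbers, just to show the shape):

```python
# A semispace copying collector reserves to-space equal to from-space
# (factor 2 worst case), but if 90% of allocations die young, the live
# data actually copied per collection is small.

heap_size = 100          # units of from-space
survival_rate = 0.10     # "90% of what you create becomes garbage"

worst_case_total = 2 * heap_size           # to-space sized for copying everything
typical_copied  = heap_size * survival_rate

print(worst_case_total)   # -> 200
print(typical_copied)     # -> 10.0  (copy work per collection in the usual pattern)
```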
-
@nosrednayduj @screwlisp @cdegroot
Also Naomi Klein's book The Shock Doctrine, very politically relevant this week, traces a lot of political ills to Milton Friedman and his ideas.
-
@wrog@mastodon.murkworks.net Haskell was first invented in 1990 or 91ish, and at that time they had already started to ask questions like, "what if we just ban set! entirely," abolish mutable variables, and make everything lazily evaluated by default. If you have been programming in C/C++ for a while, the idea that abolishing mutable variables would lead to a performance increase seems very counter-intuitive.
But for all the reasons you mentioned about not forcing a search for updated pointers in old-generation GC heaps, and also because this forces the programmer to write source code that is essentially already in Static Single Assignment (SSA) form, which is nowadays an optimization pass most compilers do prior to register allocation, it allows more aggressive optimization and results in more efficient code.
@screwlisp@gamerplus.org @kentpitman@climatejustice.social @cdegroot@mstdn.ca @dougmerritt@mathstodon.xyz
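A hand-written illustration of the SSA point (hypothetical code, not any compiler's output):

```python
# Mutable style: one name, three meanings; the compiler must track which
# assignment reaches each use before it can optimize.
x = 1
x = x + 2
x = x * 3

# SSA style: every value named exactly once; uses are unambiguous, which
# enables aggressive reordering and constant folding. Immutable-by-default
# languages get this form for free from the programmer.
x0 = 1
x1 = x0 + 2
x2 = x1 * 3

print(x, x2)   # -> 9 9: same result, but the second form never mutates a binding
```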
-
@wrog @kentpitman @screwlisp @ramin_hal9001 it's a good chunk of the reason why Erlang shines here. Per-process GC can be kept simple (a process is more like an object than a thread, so you have lots of them) and no equivalent of setq - all data is immutable.
(there is a shared heap, but that also is just immutable data).
-
@cdegroot@mstdn.ca yes, the BEAM virtual machine is pretty amazing technology, and there are very good reasons why it is used in telecom and other scenarios where zero downtime is a priority. I think .NET and Graal have been slowly incorporating more of BEAM's features into their own runtimes. Since about 3 years ago, .NET can do "hot code reloading," for example.
I have used Erlang before but not Elixir. I think I would like Elixir better because of its slightly-more-Haskell-like type system.
@wrog@mastodon.murkworks.net @kentpitman@climatejustice.social @screwlisp@gamerplus.org
-
@ramin_hal9001 @kentpitman @screwlisp @wrog not just zero downtime, the more important aspect is how it does concurrency, how it manages to scale that, and how well it fits the modern requirements of "webapps" (like a glove).
It changed my thinking about objects, just like Smalltalk did before. I'm fully on board with Joe Armstrong's quip that Erlang is "the most OO language" (or something to that extent); having objects with effectively their own address space, their own processor scheduling, etc, completely changes how you think about building scalable concurrent systems (and _then_ you get clustering for free, and sometimes hot reloading is a production thing, although 99% of the time it is good to have it in the REPL)
-
@wrog
> but I can't see how more than 2 would ever be necessary for a copying GC
It's not "necessary", it's "to make GC performance a negligible percentage of overall CPU".
It was about a theoretical worst case as I recall, certainly not about one particular algorithm.
And IIRC it was actually a factor of 7 -- 5 is merely a good mnemonic which may be close enough. (e.g. perhaps 5-fold keeps overhead down to 10-20% rather than 7's 1%, although I'm making it up to give the flavor -- I haven't read the book for 10-20 years)
But see the book (may as well use the second edition) if and when you care; it's excellent. Mandatory I would say, for anyone who wants to really really understand all aspects of garbage collection, including performance issues.
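One standard rule of thumb behind numbers like these (my own sketch of the usual analysis, not a quote from the handbook): for a tracing collector, amortized tracing work per unit allocated scales roughly like L/(H - L), live data over headroom, so overhead falls off quickly as the heap factor grows.

```python
# Amortized GC overhead as a function of heap size H relative to live data L,
# under the usual L / (H - L) model for a tracing collector.

L = 1.0                          # live data (normalized)
for factor in (2, 3, 5, 7):
    H = factor * L
    overhead = L / (H - L)       # tracing work per unit of fresh allocation
    print(factor, round(overhead, 3))
# -> factor 2 gives 1.0, factor 5 gives 0.25, factor 7 gives ~0.167
```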
-
@kentpitman
I respect you, and your contributions to Lisp and the community. So I dislike nitpicking you. But:
> Common Lisp macros more unhygienic than they actually are
This is a biased phrasing. There are hygienic macro systems, and unhygienic macro systems. One cannot assign a degree of "hygienic-ness" without simultaneously defining what metric you are introducing.
We all can agree that one can produce great code in Common Lisp. It's not like Scheme is *necessary* for that.
But de gustibus non est disputandum. There are objective qualities of various macro systems -- and then there's people's preferences about those qualities.
Bottom line: it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses, and I would agree with *that*.
-
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. there are some aspects in which scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, but one of them is "we don't know how to compile this well". There is a preference for a formal semantics that is very tight and that everything is well-understood. It is perhaps fortunate that Scheme came along after garbage collection was well-worked and did not seem to fear that it would be a problem, but I would say that Lisp had already basically dealt led on garbage collection.
The basic issue is this: Should a language incorporate things that maybe are not really well-understood but just because people need to do them and on an assumption that they might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar) for saying you want to do that thing.
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are because they don't give enough credit to things like he package system, which Scheme does not have, and which protects CL users a lot more than they give credit for in avoiding collisions. They also don't fairly understand the degree to which Lisp2 protects from the most common scenarios that would happen all the time in Scheme if there were a symbol-based macro system. So CL isn't really as much at risk these days, but it was a bigger issue before packages, and the point is that Lisp decided it would figure out how to tighten later, but that it was too important to leave out, where Scheme held back design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem and while it's possible, it's still difficult. I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did. But it was brave to say that full continuations could be made adequately efficient.
And the Lisp community in general, and here I will include Scheme in that, though on other days I think these communities are sufficiently different that I would not, has collectively been much braver and more leading than many languages, which only grudgingly allow functionality they already know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was, I think, what finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all of memory to do a GC did not introduce thrashing, since data wasn't "paged out". And on modern hardware, memory is cheap, so size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had three different, unrelated, but co-resident and mutually usable garbage collection strategies that could be separately enabled: two of them used hardware support (typed pointers), and one required that computation cease for a while, because the virtual machine would be temporarily inconsistent during the last-ditch work that particular GC could do to save the day when things were otherwise going to fail badly.
For a while, dynamic memory management would not be used in real-time applications, but ultimately the bet Lisp had made on it proved that it could be done, and making the bet drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
@kentpitman
> I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did.

My memory is that the Scheme interface for continuations was completely worked out when Scheme was born, but implementation issues were not -- beyond existence proof, that is.
> But it was brave to say that full continuations could be made adequately efficient.
Yes it was!
> the Lisp community in general, and here I will include Scheme in that
Planner, for instance, went in a quite different direction. Micro-Planner (and its SHRDLU) inspired Prolog. Robert Kowalski said that "Prolog is what Planner should have been" (it included unification but excluded pattern-directed invocation, for example); see Kowalski, R. (1988), "Logic Programming," Communications of the ACM, 31(9) -- although I think the precise phrasing is from interviews.
Anyway, Prolog was not a Lisp, but sure, definitely Scheme is. The history of Lisp spinoffs created quite a bit of CS history.
I did professional development in Scheme (at Autodesk, before that division was axed)
-- it's certainly a workable language in the real world. But we know that Common Lisp is too, obviously.
-
@kentpitman @screwlisp @cdegroot @ramin_hal9001
Generational GC changes the way you program and it's not *just* that it's efficient.
We used MIT-Scheme (which, by the early 90s was showing its age). We did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map...) but with set-car! to modify in-place -- because it made a HUGE difference not recreating all of those cons cells => bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
We were writing Scheme as if it were C because that was how you got speed in that particular world.
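For concreteness, here is a minimal sketch (my own, not MIT-Scheme's actual definition) of the kind of map! being described: it overwrites each car in place with set-car! and returns the same list structure, allocating no new pairs and so deferring the next stop-the-world mark-and-sweep round:

```scheme
;; Destructive map: mutate the input list's own cons cells
;; instead of consing a fresh result list.
(define (map! f lst)
  (let loop ((rest lst))
    (unless (null? rest)
      (set-car! rest (f (car rest)))
      (loop (cdr rest))))
  lst)

(define xs (list 1 2 3))
(map! (lambda (x) (* x x)) xs)
;; xs is now (1 4 9), and it is the very same chain of pairs
```

With a generational collector the short-lived pairs an ordinary map allocates die young and cost almost nothing, which is exactly why this style of hand-optimization stopped being necessary.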
1/3
@wrog
'setq' and friends have been criticized forever, but avoiding mutation is easier said than done. Parsing arbitrarily large sexpr's requires mutation behind the scenes -- which ideally is where it should stay. Any language we use that helps avoid mutation is a good thing. 100% avoidance is a matter of opinion -- some people claim it was proven to be fully avoidable decades ago, others say the jury is still out on the 100% part.
I don't know enough to have an opinion on whether 100% has been completely proven, but it's attractive.
-
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
@kentpitman
> 2 of them using hardware support (typed pointers)

I learned about typed pointers from Keith Sklower, from my brief involvement in the earliest days (1978?) of Berkeley's Franz Lisp (implemented in order to support the port of the Macsyma computer algebra system to the VAX, i.e. Vaxima), and it blew my mind. Horizons extended hugely.
A few years later everyone seemed to just take the idea in stride. Yet no one seems to comment on the impact that big-endian versus little-endian architectures have on typed pointers; everyone seems to regard it as a matter of taste. It's not always; it impacts low-level implementations.
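To sketch the idea (a generic illustration of low-tag pointer encoding, not Franz Lisp's or the Lisp Machines' actual scheme): with word-aligned objects the low bits of every address are zero, so they can carry a type tag for free; whether those tag bits land in the first or the last byte of the word in memory is exactly where endianness stops being a matter of taste. Simulated here with plain integers standing in for addresses:

```scheme
;; Simulated low-tag typed pointers: 8-byte-aligned addresses
;; leave the low 3 bits free for a type tag.
(define tag-fixnum 0)
(define tag-cons   1)
(define tag-symbol 2)

(define (make-tagged address tag)    ; address must be 8-byte aligned
  (+ address tag))

(define (tag-of ptr)  (modulo ptr 8))        ; low 3 bits
(define (addr-of ptr) (- ptr (tag-of ptr)))  ; mask the tag back off

(define p (make-tagged 4096 tag-cons))
(tag-of p)   ; => 1, i.e. a cons
(addr-of p)  ; => 4096
```

On a little-endian machine the tag bits sit in the byte at the word's lowest address, so byte-level code can fetch the tag without loading the whole word; on a big-endian machine they sit at the other end, which is the kind of low-level implementation consequence being alluded to.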