@screwlisp is having some site connectivity problems so asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:
-
@kentpitman
> That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.

I hope everyone here is already clear that "expressiveness" is something that comes along on *top* of a language's Turing equivalence.
Indeed Turing Machines (and pure typed and untyped lambda calculus and SKI combinatory calculus and so on) are all *dreadful* in terms of expressiveness.
And for that matter, expressiveness can be on top of Turing-incomplete languages. Like chess notation; people argue that the algebraic notation is more expressive than the old descriptive notation. (People used to argue in the other direction.)
@dougmerritt@mathstodon.xyz yes, I was maybe a little unclear in what I wrote. I tend to take shortcuts when I write about Scheme that make it seem I am equating it to the untyped lambda calculus.
I have heard of the Turing Tarpit. And I have inspected the intermediate representation of Haskell before (the Haskell Core), so I have seen first-hand that expressing things in lambda calculus makes the code almost impossible to understand.
@kentpitman@climatejustice.social @screwlisp@gamerplus.org @wrog@mastodon.murkworks.net @cdegroot@mstdn.ca
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
[..it's possible I'm missing the point, but I'm going to launch anyway...]
I believe trying to define/formalize "expressiveness" is roughly as doomed as trying to define/formalize "intelligence". w.r.t. the latter, there's been nearly a century of bashing on this since Church and Turing and we're still no further along than "we know it when we see it"
(and I STILL think that was Turing's intended point in proposing his Test, i.e., if you can fool a human into thinking it's intelligent, you're done; that this is the only real test we've ever had is a testament to how ill-defined the concept is...)
1/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
The point of Turing equivalence is that even though we have different forms for expressing algorithms and there are apparently vast differences in comprehensibility, they all inter-translate, so any differences in what can ultimately be achieved by the various forms of expression is an illusion. We have, thus far, only one notion of computability.
(which is not to say there can't be others out there, but nobody's found them yet)
2/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
I believe expressiveness is a cognition issue, i.e., having to do with how the human brain works and how we learn. If you train yourself to recognize certain kinds of patterns, then certain kinds of problems become easier to solve.
... and right there I've just summarized every mathematics, science, and programming curriculum on the planet.

What's "easy" depends on the patterns you've learned. The more patterns you know, the more problems you can solve. Every time you can express a set of patterns as sub-patterns of one big super-pattern small enough to keep in your head, that's a win.
I'm not actually sure there's anything more to "intelligence" than this.
3/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
I still remember trying to teach my dad about recursion.
He was a research chemist. At some point he needed to do some hairy statistical computations that were a bit too much for the programmable calculators he had in his lab. Warner-Lambert research had just gotten some IBM mainframe -- this was the early 1970s -- so he decided to learn FORTRAN, and he became one of their local power-users.
Roughly in the same time-frame, 11-year-old me found a DEC-10 manual one of my brothers had brought home from college. It did languages.
Part 1 was FORTRAN.
Part 2 was Basic.
But it was the last section of the book that was the acid trip.
Part 3 was about Algol.
4/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
This was post-Algol-68, but evidently the DEC folks were not happy with Algol-68 (I found out later *nobody* was happy with Algol-68), so ... various footnotes about where they deviated from the spec; not that I had any reason to care at that point.
I encountered the recursive definition of factorial and I was like,
"That can't possibly work."
(the FORTRAN and Basic manuals were super clear about how each subprogram has its dedicated storage; calling one while it was still active was every bit as much an error as dividing by zero. You're just doing it wrong...)
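For what it's worth, the definition that triggered the "That can't possibly work" reaction is the usual one; here it is sketched in Python rather than Algol:

```python
def factorial(n: int) -> int:
    # Each call gets its own activation record, so the pending
    # multiplications survive the nested calls -- exactly what the
    # static-storage model in those FORTRAN/Basic manuals ruled out.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```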
5/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
Then there was the section on call-by-name (the default parameter passing convention for Algol)
... including a half page on Jensen's Device, that, I should note, was presented COMPLETELY UN-IRONICALLY because this was still 1972,
as in, "Here's this neat trick that you'll want to know about."
And my reaction was, "WTFF, why???"
and also, "That can't possibly work, either."
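For anyone who hasn't met it: Jensen's Device exploits call-by-name by passing an *expression* that gets re-evaluated on every use inside the callee. A rough Python simulation using thunks (the names here are mine, not from the DEC manual):

```python
class Cell:
    """A mutable cell standing in for an Algol variable passed by name."""
    def __init__(self, val=0):
        self.val = val

def jensen_sum(lo, hi, i, term):
    # Under call-by-name, 'term' is re-evaluated on each use, so it
    # can depend on the index variable the callee keeps changing.
    total = 0
    for v in range(lo, hi + 1):
        i.val = v          # the callee updates the "by name" index...
        total += term()    # ...and the caller's expression sees it
    return total

i = Cell()
# Algol would write something like sum(i, 1, 5, i*i); in Python the
# expression has to become an explicit thunk:
print(jensen_sum(1, 5, i, lambda: i.val * i.val))  # 55
```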
Not having any actual computers to play with yet, that was that for a while.
Some years later, I got to college and had my first actual programming course...
6/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
... in Pascal. And there I finally learned about recursion and got used to using it.
Although I'd say I didn't *really* get it until the following semester taking the assembler course and learning about *stacks*.
It was like recursion was sufficiently weird that I didn't really want to trust it until/unless I had a sense of what was actually happening under the hood.
And THEN it was cool.
7/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
To the point where, the following summer as an intern, I needed to write a tree walk, and I wrote it in FORTRAN — because that's what was available at AT&T Basking Ridge (long story) — using fake recursion (local vars get dimensioned as arrays, every call/return becomes a computed goto, you get the idea…) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one.
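A rough Python sketch of the same trick (the tree layout and names are my guesses, not the original FORTRAN): per-call locals become an explicit stack, and the loop plays the role of the computed goto.

```python
def inorder(tree):
    """In-order walk of a nested-tuple tree (left, value, right) or None,
    with no language-level recursion: the explicit stack stands in for
    the DIMENSIONed local arrays, and popping it is the 'return'."""
    out, stack, node = [], [], tree
    while stack or node is not None:
        if node is not None:
            stack.append(node)   # save this "call frame" and descend left
            node = node[0]
        else:
            node = stack.pop()   # "return" to the saved frame
            out.append(node[1])
            node = node[2]       # descend right
    return out

t = ((None, 1, None), 2, (None, 3, None))
print(inorder(t))  # [1, 2, 3]
```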
And *then* I tried to explain it all to my dad...
8/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
And, to be fair, by then, he had changed jobs/companies, moved up to the bottom tier of management, wasn't using The Computer anymore, so maybe the interest had waned.
But it struck me that I was never able to get past showing him the factorial function and,
"That can't possibly work."
He had basically accepted the FORTRAN model of things and that was that.
Later, when he retired he got one of the early PC clones and then spent vast amounts of time messing with spreadsheets.
9/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
You may say that untyped lambda calculus (and SKI combinatory calculus and so on) are all *dreadful* in terms of expressiveness, and I will probably agree,
... but it also seems to me that Barendregt got pretty good at it.
I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),
... but at the same time, it's *still* the case that if I need to write a one-liner to do something, chances are, I'll be doing it in Perl, and I've heard people complain about *that* language being essentially write-only line-noise.
10/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
To be sure, my Perl tends to be more structured.
On the other hand, I also hate Moose (Perl's attempt at CLOS) and have thus far succeeded in keeping that out of my life.
I also remember there being a time in my life when I could read and understand APL.
But if you do think it's possible to come up with some kind of useful formal definition/criterion for "expressiveness", go for it.
I'll believe it when I see it.
11/11
@wrog
Thank you for the wonderful autobiography!
@dougmerritt @kentpitman @ramin_hal9001 @cdegroot
@wrog@mastodon.murkworks.net your story about learning recursion in Algol reminded me of a story that was told about how Edsger Dijkstra influenced the Algol spec (through a personal conversation with I think John Backus) to include what we now understand as a "function call," by specifying the calling convention for how the stack must be modified before the subroutine call and after the return. I first heard the story in a YouTube video called "How the stack got stacked".
Regarding "expressiveness," you do make a good point about it (possibly) being fundamentally a subjective thing, like "intelligence." Personally, I never felt the restrictions in Haskell made it any less expressive as a language.
It is interesting how you can express some incredibly complex algorithms with very few characters in APL. Reducing function names to individual symbols applied as operators does make the language much more concise, but is "concise" a necessary condition for "expressive?"
@dougmerritt@mathstodon.xyz @kentpitman@climatejustice.social @screwlisp@gamerplus.org @cdegroot@mstdn.ca
@ramin_hal9001
Some clearly prefer concise, but nonetheless it is orthogonal to expressive.
'Expressive' != 'my favorite approach' -- ideally expressiveness can be determined objectively by human factors studies.
Failing that, sure, it's then subjective and subject to unbounded argument.

-
@wrog
Yes, thanks, I liked reading that.
@wrog
> I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),

I'm one of those people, BTW. My proof is that I wrote a closed-loop stick figure ASCII animation juggling three balls.
As with any complex TECO thing, the resulting code was write-only -- and that was always the problem with even mildly powerful TECO macros.
Perl at its worst can be described as write-only line noise, yes, but in my experience is *STILL* better than TECO!
I am indeed fortunate to be able to stick with Emacs and Vi.
@dougmerritt @wrog @ramin_hal9001 @screwlisp @cdegroot
TECO was a necessary innovation under word-addressed memory. With 36 bits per word, you couldn't afford that much space for an instruction. 5 7-bit bytes (with a bit left over) in one word was a lot more compact than an assembly instruction. With only 256 KW (kilowords) total addressable in 18 bits, you had to get all the power packed in you could. And we didn't have WYSIWYG yet, and most computer people couldn't type. So it would make a lot more sense to you if you were doing hunt and peck with almost no visibility into what you're changing. Typing -3cifoo$$ to mean go back three characters and insert foo and show me what the few characters around my cursor look like was extremely natural in context. That it became a programming language was a natural extension of that so that you didn't have to keep typing the same things over and over again.
@dougmerritt @wrog @ramin_hal9001 @screwlisp @cdegroot
In effect, a Q register, what passed for storage in TECO, was something you can name in one byte. So 1,2mA meaning call what's in A with args 1 and 2 was a high-level language function call with two arguments that fit into a single machine word. Even the PDP-10 pushj instruction, which was pretty sophisticated as a way of calling a function, couldn't pass arguments with that degree of compactness.