So I'm having a "This is why we still use Fortran" moment today.
-
@arclight That said: I have that same book but haven’t worked thru it yet; I’m also only vaguely familiar with diffeq so definitely don’t have an informed opinion there. But it makes sense that Scheme/Lisp are uncomfortable fits out of the box for your problem space of expertise.
I say “out of the box” because IMO the strength of Lisps generally is the ability to mold it into the shape of your problem, but building up the kinds of affordances Fortran has is a big task. 2/ @mdhughes
@arclight Still on my first coffee so that may make less sense than it seems to me right now, but it’s hopefully still useful.
There *are* folks doing numeric solutions in Lisps, and have been for decades, but the most natural fit is in analytic solutions and symbolic computing. (FWIW Common Lisp has a whole other syntax for that shape of problem: LOOP. Scheme is designed with a much “purer” syntax, to be cleaner and more theoretically consistent.)
Back to my coffeelurk.
3/3 -
@AlgoCompSynth I tried Haskell and within 30 minutes had hard-locked my desktop and needed to power-cycle it to get it back. Hadn't had that happen in decades. I looked at Julia; it's designed for research code with odd design choices plus this breathless fascination with multiple dispatch. Didn't seem worth pursuing.
I'm still having a big problem finding anything but C++ and Modern Fortran for writing production code. Ada was too hard to get traction with, and it's intended more for embedded systems than for desktops and servers. Everything else is single-source: the implementation is the spec. Great until the maintainers decide that slop PRs are acceptable and you're chained to that sinking ship.

@arclight @AlgoCompSynth The secret to Julia is it’s essentially a Lisp with infix syntax. Multiple dispatch is how they solve the problem of fast numerical solutions being extremely sensitive to types, which is probably the only way you can sensibly do it, but it has taken a lot of work to get it to where it is now (which AIUI is much better than it was even a year or two ago).
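For anyone who hasn't met the idea: Julia selects a method based on the runtime types of *all* the arguments, not just the first. A toy Python sketch of the concept (the `dispatch`/`call` registry here is invented for illustration, not any real library's API):

```python
# Minimal fake of multiple dispatch: pick the implementation from the
# tuple of argument types. Julia does this natively and compiles a
# specialized method per type combination; here we just look it up.
_registry = {}

def dispatch(*types):
    """Register an implementation for a specific tuple of argument types."""
    def register(fn):
        _registry[(fn.__name__, types)] = fn
        return fn
    return register

def call(name, *args):
    """Invoke the implementation matching the argument types exactly."""
    fn = _registry[(name, tuple(type(a) for a in args))]
    return fn(*args)

@dispatch(int, int)
def mul(a, b):
    return a * b          # exact integer path

@dispatch(float, float)
def mul(a, b):
    return a * b          # a fast float path could live here

print(call("mul", 2, 3))      # dispatches on (int, int)
print(call("mul", 2.0, 3.0))  # dispatches on (float, float)
```

The point of the mechanism is exactly the type sensitivity mentioned above: the numeric kernel that's fast for one type combination is wrong or slow for another, and dispatch routes each combination to its own specialized code.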
-
@nyrath APL is simultaneously genius and batshit. It's incredible what you can do with 2-3 sigils, but it is cryptic as hell. It's the sort of language used by people who talk to crows.
When I was in college (1980s), we wrote our source code on DECwriters. Which could not handle the APL character set.
If you wanted to write APL you had to go to the computer lab's lonely IBM Selectric, and use the lab's only APL type ball.

-
And mind you, this isn't a He-Man Lisp Hater's rant. It's more of a mope that I feel I'm wasting my time, like William from Mallrats who camps out in front of the random-dot-stereogram all day but can never see the sailboat. I just don't see it and I can't tell if I'm looking at it wrong or if there's just nothing there. I assume the former which has put me down this particular rabbit hole. At some point I need to stop digging.
@arclight have you ever looked at the ML family of languages?
-
@flyingsaceur No, not yet, but ML is definitely on my list! I really enjoyed Greg Michaelson's book on the lambda calculus, which (eventually) used ML as its reference language https://www.cs.rochester.edu/~brown/173/readings/LCBook.pdf
That book was such an eye opener, the way it built increasingly more useful and familiar constructs from the barest essentials. You understand the fundamental theoretical underpinning and the language details are wedged way in the back.
-
@arclight I see you are a man of wealth and taste: that is one of the four books I have in my “antilibrary” on Standard ML; the capstone being Appel’s SML edition of the “Tiger Book” viz. ‘Modern Compiler Implementation in ML’
-
@arclight @AlgoCompSynth Speaking of different syntaxes… Coalton is super interesting. Sort of a Lisp-ML-Fortran hybrid.
A Preview of Coalton 0.2
By Robert Smith Coalton is a statically typed functional programming language that lives inside Common Lisp. It has had an exciting few years. It is being used for industrial purposes, being put to its limits as a production language to build good, reliable, efficient, and robust products. Happily, with Coalton, many products shipped with tremendous success. But as we built these products, we noticed gaps in the language. As such, we’re setting the stage for the next tranche of Coalton work, and we’re going to preview some of these improvements here, including how Coalton can prove $\sqrt{2+\sqrt{3}} = \sqrt{2}(\sqrt{3}+1)/2$ exactly.
The Coalton Programming Language (coalton-lang.github.io)
The section in this post on “Real algebraic numbers and xmath” in particular seems like it might activate some of your numerics neuroreceptors.
-
That apparently translates to 8 lines of impenetrable Scheme:
@arclight As someone who can read Scheme reasonably well: the only way this is workable is within a REPL-oriented environment like DrRacket or Emacs. Might be just me though.
-
I think @jannem identified a key pain point related to expected programmer usage - a very specific type or structure of program that languages like Scheme and Forth excel at. @peterb summed it up when quoting this thread - "batteries not included".
Implementing this example in Forth would probably make this issue clearer. You're not so much programming as building a DSL from the ground up. There's an excruciatingly small - powerful but spartan - set of commands available that are mostly limited to building other commands. The expectation is that your problem lends itself to being decomposed into a set of increasingly detailed expressions. All the comforts of intermediate state or structures beyond expressions are missing. This might be great for the interpreter, but it's hell on human parsing and interpretation, diagnostics, and efficiency.
There's a further unstated expectation with Scheme that your problem can be reduced to simple transaction processing. Go process this arbitrarily long list or sequence of identical simple things or generate a sequence starting with a single simple element.
The expectation of simplicity and uniformity, of an elegant problem, does not lend itself to most forms of engineering analysis. Here's an example of what I deal with on a regular basis: calculate the diffusion coefficient for use in a higher-level function which models aerosol behavior. This is taken from DOE HD-10216, Vol. VIII "Modifications for the Development of the MAAP-DOE Code Volume VIII: Resolution of the Outstanding Nuclear Fission Product Aerosol Transport and Deposition issues WBS 3.4.2" if you want the full context.
What I want you to note in this notation is that in most cases the full argument list for each expression is not written. There are a few key variables (radius/volume, temperature) and a whole bunch of ancillary parameters (material properties, physical constants, system conditions). The latter are omitted for clarity, but in a code implementation every ancillary parameter has to be a function argument. Modules let you import parameters and functions directly from within a function, but that's compile-time syntactic sugar; get rid of modules and all that baggage moves back into the function interface. I'll leave it as an exercise to write out D(r) with the full list of required arguments and parameters.
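As a toy version of that exercise, here's a sketch of the argument-list problem and one common mitigation: bundle the ancillary parameters into a single record so only the key variables stay in the signature. The physics below is a stand-in (a Stokes-Einstein-like form), not the actual MAAP-DOE correlation, and every name is made up for illustration:

```python
# Illustrative only: "mile-long argument list" vs. a bundled
# ancillary-parameter record. The formula is a placeholder.
from dataclasses import dataclass
import math

# Flat version: every ancillary parameter is a separate argument.
# (p and M_gas are carried along purely to mimic the baggage a real
# correlation drags in; this toy formula happens not to use them.)
def diffusion_coeff_flat(r, T, k_B, mu, slip, p, M_gas):
    return k_B * T * slip / (6.0 * math.pi * mu * r)

@dataclass(frozen=True)
class Ancillary:
    k_B: float    # Boltzmann constant [J/K]
    mu: float     # gas dynamic viscosity [Pa*s]
    slip: float   # slip correction factor [-]
    p: float      # system pressure [Pa]
    M_gas: float  # gas molar mass [kg/mol]

# Bundled version: the key variables (r, T) stay visible; everything
# else rides along in one structure.
def diffusion_coeff(r, T, anc):
    return anc.k_B * T * anc.slip / (6.0 * math.pi * anc.mu * r)

anc = Ancillary(k_B=1.380649e-23, mu=1.8e-5, slip=1.0, p=1.0e5, M_gas=0.029)
print(diffusion_coeff(1e-7, 300.0, anc))  # D for a 100 nm particle
```

The record doesn't remove the essential complexity - those parameters still have to come from somewhere - it just keeps the signature readable, which is roughly what modules or objects buy you.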
And this is a _simple_ example - it's just a long sequence of tedious arithmetic. No conditional statements, decision logic, or iterative solution, just a big tedious pile of algebra reducible to a single function call.
The only way we make sense of these programs as engineers is by introducing intermediate quantities as abstractions and ignoring the details we are not focused on.
The object-oriented approach helps a huge amount by allowing elements to be composed of other elements, which hides detail while still keeping it accessible. A factory function that populates an object capable of calculating this diffusion coefficient would suffer the same fate of needing the same mile-long argument list as the straight Scheme expression. It's an essential complexity of these sorts of problems, but it's treated like the constant term in algorithmic complexity (big-O) analysis - it's not interesting so it's ignored. Which is great until your O[n log n] algorithm gets smoked by an O[n²] algorithm because the constant terms dominate for the actual value of n in your application.
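That constant-factor point is easy to show with a toy cost model (the constants 100 and 1 are made up for illustration):

```python
# Toy operation-count model: an O(n log n) algorithm with a large
# constant factor loses to an O(n^2) algorithm with a small one until
# n grows past the crossover point.
import math

def cost_nlogn(n, c=100):
    return c * n * math.log2(n)

def cost_n2(n, c=1):
    return c * n * n

for n in (16, 64, 256, 1024):
    winner = "n log n" if cost_nlogn(n) < cost_n2(n) else "n^2"
    print(f"n={n:5d}  cheaper: {winner}")
```

With these particular constants the O[n²] algorithm wins everywhere below n of roughly a thousand - which may well cover the entire operating range of an actual application.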
The diffusion coefficient calculation situation occurs far more frequently in practice than Harper & Sussman's ODE situation. It's tedious and uninteresting from their perspective - they're trying to show the flexibility of (history t n) in an environment without the affordance of indexed (random access) lists so the constant terms aren't their focus. It's fair to do that but it's left for the reader to recognize **they aren't solving the whole problem.**
As an engineering analysis application developer, I am responsible for identifying and solving the whole problem and for communicating this solution to people who may lack either the subject matter background (software developers) or the software engineering background (engineering analysts). Notation - how we name things - is incredibly important. So while I'm not faulting Harper & Sussman for focusing on the central point of their book, it's excruciatingly difficult to put their ideas into practice in real engineering applications because of their insistence on using a batteries-not-included implementation language. It's minimal and elegant (in concept at least), but their Scheme syntax maps poorly to the underlying subject matter notation when read with human eyes on print media. It's made easier with a good IDE, but that's now a hidden external dependency for the practitioner. If I want to effectively make sense of their work, the IDE requirement is a serious barrier.
You might call me out for being a troglodyte, wed to dead-tree media. I'll counter that dead-tree media is perpetually accessible and immutable. It may accumulate errata and obsolescence, but I'm guaranteed that what I read from the printed page is deterministic and repeatable. Requiring the subject to be explained or understood via an IDE makes it ephemeral. This is off in the weeds, but we really don't want the medium to be the message here.
-
@arclight Nah, I agree with you. I think the object model has its own share of problems, but yeah, this is the same reason I don't like most Lisps, at least as a human-facing interface.
It's basically the machine language with no sugar, and even then it's limited to an old and reductive model.
-
My conclusion for the moment is that the functional approach is extremely limited in its application to the sort of software I deal with. Conceptually, the notions of immutability and lack of side effects are valuable. Recursion has efficiency issues and subtle but catastrophic failure modes compared to known-finite iteration or array operations. Much of the problem seems to be bound up in notation and the expected solution form of specific functional languages. The notational issue is obvious, but the expectations of uniformity, simplicity, and elegance of problem are not reasonable for even simple (but tedious) non-repetitive calculation. There are good concepts to understand here, but practical applicability is extremely limited because of the nature of physical modeling and the extensive need for varied and structured data.
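The recursion failure mode in miniature (Python here, where deep non-tail recursion hits the interpreter's stack limit; the same pattern exhausts the real stack in compiled languages):

```python
# A naive recursive sum blows the call stack on a large input,
# while the known-finite loop just works.
def rec_sum(xs):
    if not xs:
        return 0
    return xs[0] + rec_sum(xs[1:])  # non-tail recursion, one frame per element

def iter_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(100_000))
print(iter_sum(data))  # fine for any length

try:
    rec_sum(data)      # ~100,000 stack frames needed
except RecursionError:
    print("RecursionError: recursion depth exceeded")
```

The catastrophic part is that the recursive version works perfectly on every small test case and only fails when the input gets big enough in production.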
I appreciate Harper & Sussman's efforts, but ultimately I think the problem is a lack of detailed and realistic case studies of actual physical analysis. The points they are trying to get across would be quickly mired in tedious calculation and coding. That should be a warning that their specific approach may not be able to address simpler, common issues. Maybe those are addressed in a different book; maybe in my quick scan of their book I skipped over where they acknowledge these issues or point to a broader reference - I don't know, because I jumped to a quick recognizable example. I'm not trying to blame them for not addressing an issue they didn't set out to address. I'm just noting my frustration with finding any example of a purely functional approach, in a common language, that solves a broadly data-heavy problem from a straightforward domain-specific problem statement. It's a lot of work to do even a small example, and nobody wants to put in the effort to document the process of solving a problem with the wrong tool. But how do you know a tool is the wrong tool if you don't actually explore how it fails in practice and identify the characteristics of problems that are a bad fit for a language (e.g. hardware interfacing, text processing, or expression evaluation in Fortran)?
This isn't a complaint about Scheme. The language is what it is. It shouldn't be a surprise given that it descends from Lisp, the other long-lived language, originally designed for the IBM 704 in the late 1950s. It's simply not designed to solve problems with a broad set of unique (non-sequential) data and a moderately high degree of coupling. But as far as I can tell, nobody actually says that in a single coherent sentence. It's always vaguely alluded to but never spelled out. A lot of it, I think, is the myopia of the language community or instructors in not understanding the breadth of applications - the classes of problems actually solved by people outside computer science, and how and which languages get used. There's virtually no feedback from engineering analysis to computer science the way, say, web applications or databases or embedded firmware or search or text processing feed back. CS hits floating-point numerics and calls it a day, not looking at the actual applications and classes of problems that use those numerics.
-
@arclight I think Lisp isn't a good tool for that sort of problem, really, unless you involve some DSL.
Where it shines, in my opinion, is on problems shaped like slinging around tree-shaped data structures. Not all problems can be hammered into that shape, but many of the ones I like puttering with can, with little effort.

-
@arclight There are, of course, objects in Scheme, just like in Common Lisp. Typically we start with records, which are pretty primitive. CLOS and Scheme class systems (there are many, many variants; everyone's written one or more) are more useful for data collecting/hiding. Let-over-lambda objects are the Schemiest and most flexible, but harder to build big structures out of.
Nobody uses an IDE for Scheme/CL. Some of us use just REPL & vi (or ex, in one case I know), others use emacs & SLIME.
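For readers who don't speak Scheme, the let-over-lambda style roughly translates to "an object is a closure over private state, dispatching on a message." A hypothetical Python rendering of the idea:

```python
# Let-over-lambda in Python terms: state lives in the enclosing scope
# (the "let"); behavior is a returned procedure (the "lambda") that
# dispatches on a message name.
def make_counter(start=0):
    count = start  # private state, closed over

    def dispatch(msg):
        nonlocal count
        if msg == "inc":
            count += 1
            return count
        if msg == "value":
            return count
        raise ValueError(f"unknown message: {msg}")

    return dispatch

c = make_counter()
c("inc")
c("inc")
print(c("value"))  # 2: state survives between calls, invisible outside
```

The flexibility comes from the state being completely encapsulated; the awkwardness for large structures comes from the same fact, since nothing outside the closure can see or compose that state directly.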
#scheme -

@datarama Exactly! Which is one reason I find the incessant cramming of unavoidable functional features into general-purpose languages (say, Python) really annoying: they make my life difficult. They keep bending the language away from certain classes of problems for what I can only see as fashion or whim.
-

@arclight @AlgoCompSynth
Maybe Go or possibly Rust might be possible alternatives for you. But what you're looking for is an imperative language for numerical computing, and modern Fortran is right there. I honestly don't think you'll find anything that can really improve a lot on that.
C++ has, to me, sort of the same issue as Scheme. A language committee keeps creating a new, incomprehensible language every few years and insists it should still be called C++.
-
@AlgoCompSynth @arclight
I believe a major concern here is software longevity. You need to be able to grab the source in 15-20 years and it should verifiably still just work, and in the same way it used to. R, for all its good points, is not meant for that. Similar issues would rule out Python, Perl, Ruby, JS and so on.