As far as prior work goes, I think this topic is the last time "can't rebind t" came up: http://arclanguage.org/item?id=13080 It's mainly just me linking back to those other two pages, but it's also a pretty good summary of my own opinion of the issues involved.
In my runtime project, nil and t are ordinary global variables... if someone turns out to want the rebind protection feature, I'll add it as a compiler extension (which other people could then choose to apply, or not, as they wished).
What I would do is make it print a warning, but still allow it. Something like, "hey, you! it's a bad idea to rebind nil; you'll probably break everything! use (= nil ()) to fix the mess you probably made"
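As a rough sketch of what that could look like in plain Arc (a hypothetical warn= wrapper macro, not a real compiler extension, and assuming a runtime that permits the rebinding at all):

(mac warn= (place val)
  `(do (when (in ',place 'nil 't)
         (prn "hey, you! it's a bad idea to rebind " ',place
              "; you'll probably break everything!"))
       (assign ,place ,val)))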
Printing a warning is a good idea. Whether rebinding nil could be fixed with "(= nil ())" is an interesting question, you might (or might not, I haven't tried it) find that rebinding nil breaks Arc so badly that = no longer works... :-)
I see a potential for a contest here: how to permanently break Arc (with "permanently" meaning you can't recover by typing something at the REPL), in the most interesting way, using the fewest characters. (Non-interesting being, for example, going into an infinite loop so that you don't return to the REPL).
Quite possibly. It should work in my interpreter, though. Actually, () and nil are two different values, but they're special-cased to be eq to each other. It was the only way I found to make one print as "nil" and the other print as "()". From an external point of view, though, they should seem the same.
Pressing <Enter> sends your command line "(readline)\n" to Racket's reader, but Racket's reader reads only as much of the input as it needs to get a complete expression, such as "(readline)", and leaves the rest of the line unread.
arc> (readc);
#\;
arc> (readline);hello there
";hello there"
A reasonable fix would be for the REPL loop, when it knows its input is coming from the terminal (and so won't see the final line of input until you've pressed enter), to do its own "(readline)" before evaluating the expression, and so consume the excess input.
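Something like this sketch, perhaps (repl-read is a made-up name):

(def repl-read ((o str (stdin)))
  (do1 (read str)
       (readline str)))  ; discard whatever is left on the typed line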
Interesting, and not entirely wanted, I think. Is there a way to prompt for a new character? I don't see a way to clear out a port; peekc waits for a character to peek at; readall waits until you type nil.
Scheme's 'char-ready? is probably close to what you're looking for. It returns #f if the runtime can guarantee that the stream would have blocked if it were read from at that time, and otherwise it returns #t.
; This exploits an Arc 3.1 bug to drop to Racket (as seen at
; http://arclanguage.org/item?id=11838). It's unnecessary on Anarki.
(mac $ (racket-expr)
  `(cdr `(nil . ,,racket-expr)))

(def discard-input ((o str (stdin)))
  " Reads all the characters from the stream, giving up partway if it
    can determine the stream would block. The return value is `nil'. "
  (while:and $.char-ready?.str readc.str))
arc> (do (pr "enter something> ")
       (discard-input)
       (prn "The first character was " (tostring:write:readc) ".")
       (discard-input))
enter something> something
The first character was #\s.
nil
arc>
Yes. If you're signed in as a user with write access, you will see buttons "Create Page", "Edit Page", and "More actions", and under more actions is "Revision History".
If you email me at andrew.wilcox@gmail.com, I'll add you as a user with write access :-)
Ah, I found a setting which enables viewing the revision history even for people who aren't logged in. In the page footer there's now a "Revision History" link.
There is a small difference: if you've loaded only the code up to the point of the definition which is being tested when you run the test (either by writing tests in the same source code file as the definitions, or by using some clever test infrastructure), then you prove that your definitions aren't using anything defined later.
Of course you can probably tell whether code is in prerequisite order just by looking at it, so this may not add much value.
Is there a general interest in moving ssyntax functionality to the reader?
In the Arc runtime project, that assumption is why I chose my matching library to implement the reader in Arc. The matching library is far more powerful than what would be needed to simply replace the Racket reader as-is; the goal is that when people want to experiment with different kinds of syntaxes, or with extending ssyntax to work in more cases, it will be easy to do.
Something I've been thinking about, though I haven't implemented anything yet, is that there's code, and then there's things related to that code such as prerequisites, documents, examples, tests, etc. The usual practice is to stick everything into the source code file: i.e., we start off with some require's or import's to list the prerequisites, doc strings inline with the function definition, and, in my case, tests following the definition because I wanted the tests to run immediately after the definition.
But perhaps it would be better to be able to have things in separate files. I could have a file of tests, and the tests for my definition of "foo" would be marked as tests for "foo".
Then, for example, if I happened to want to run my tests in strict dependency order, I could load my code up to and including my definition of foo, and then run my tests for foo.
"the tests for my definition of foo would be marked as tests for foo."
In java or rails each class file gets a corresponding test file in a parallel directory tree. I find it too elaborate, but it does permit this sort of testing classes in isolation.
Not by design, as it happens. I wrote some new tests for code written in Arc, and stuck them into a separate file because I hadn't gotten around to implementing a mechanism to load Arc code without running the tests.
Though I do view writing dependencies-first as a form of scaffolding. You may need or want scaffolding for safety, or because you're working on a large project, or because you're in the midst of rebuilding.
Does that mean that you always need to use scaffolding when you work on a project? Of course not. If you're getting along fine without scaffolding, then you don't need to worry about it.
Nor does the possibility that you might need scaffolding in the future mean that you have to build it right now. For example, if I had some code that I wanted to rebase to work on top of a different library, and it wasn't in dependency order, and it looked like the rebasing work might be hard, I'd probably put my code into dependency order first to make the task easier. But if I thought the rebasing was going to be easy, I might not bother. If I ran into trouble, then perhaps I'd backtrack, build my scaffolding, and try again.
sometimes you might want the reference to "foo" to refer to the foo you are currently defining, or to the foo in the enclosing scope.
There are a couple ways to do this.
One way is that you can use different names for some-let to indicate which one you want.
For example, you could have "let" mean to use foo from the enclosing scope (which is what Arc does now), and have "letrec" mean to use the foo that you're defining.
Or, if you prefer, you could have "let" mean to use the foo that you're defining (and so would work like what we've been calling letrec), and make up another name ("let-parent" or something) for the other kind of let which works like Arc's let does now. You might prefer this if you use the former more often than the latter.
Choosing different names for let isn't the only way to distinguish between the two possibilities. You could modify the Arc compiler so that let worked the way it does now, except that you add a syntax to get at the foo being defined:
(let rec (fn (x) (this.rec x)))
or, if you wanted let to work like letrec by default instead, except that you needed some way of sometimes referring to the enclosing foo, you could create a syntax for that (say, a hypothetical "outer" prefix):
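(let rec (fn (x) (outer.rec x)))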
Yes, that's what I'm saying. I understand now that sometimes it is useful to pull in a variable from an outer scope, but sometimes you want the behavior of letrec. I'm pretty sure letrec would be trivial to write as a macro, so this discussion is more about which should be the default. Do people usually want the outer scope, or the inner?
P.S. I whipped up a quick version of letr; it seems to work okay. The obvious approach is to bind the name first and then assign into it, so the fn can see itself; something like this sketch:
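(mac letr (var val . body)
  `(let ,var nil
     (= ,var ,val)
     ,@body))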
P.P.S. Adding more syntax would also solve it, but I actually would prefer to give them two different names. It just seems cleaner and less cluttered to me.
1) afn is somewhat of an anomaly. When I see "fn" I know "this is a function" but when I see "afn" I have to pause for a split second to realize what it does.
2) Pattern recognition due to redundancy. When I glance at the first version, my eyes quickly notice that there are 3 references to "rec", and thus I immediately form a pattern in my mind. But in the second version, there are two references to the same function: self and rec. This adds an additional cognitive overhead for me.
In other words, the first one appears less cluttered to me because it has fewer unique elements, and because "fn" is more familiar to me than "afn".
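To illustrate the shape (a toy factorial, not the original snippets):

(letr rec (fn (n) (if (is n 0) 1 (* n (rec (- n 1)))))
  (rec 5))

(let rec (afn (n) (if (is n 0) 1 (* n (self (- n 1)))))
  (rec 5))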
It's not a huge deal, but I would like to reduce the cognitive overhead of reading and writing code as much as I can, which is one of the things I really like about Arc: it's clean, simple, and short, but allows tons of flexibility to change the language to suit the way you think.
The function might have some internal state, like variables or helper functions. I don't want those to clutter up the namespace, nor do I want them to be mucked up by other code.
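For instance (just a sketch), a top-level let can close a definition over private state without exposing it:

(let count 0
  (def next-id ()
    (++ count)))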
I used to have top-level (let ...) forms here and there for exactly the reasons you described, but sooner or later I regretted not being able to view those values at the REPL during debugging. For instance, Lathe's 'multival-cache* [1] is an implementation detail for the multival framework (which I won't go into here), but it turns out to be really useful to inspect when the framework goes awry.
In any case, I welcome the ability for people to muck with my code, 'cause it's a good way to experiment with making it better. ^_^ Just because something's exposed doesn't mean it's part of a stable API, especially in Arc where things are assumed unstable by default.
Cluttering up the namespace is still an issue, but the obvious advice is to choose unique names. ;) If you're still very worried about it, like I am, I humbly suggest you look at Lathe's namespace system.
I haven't looked at LavaScript, but I'm curious as to why you wouldn't be able to support quasiquotation?
If you don't have quasiquotation yet, but you do have Arc's mac working, macros can be written without quasiquotation by expanding the quasiquotes by hand:
(mac foo (x y . body)
  `(bar ,x ,y ,@body))

=>

(mac foo (x y . body)
  (cons 'bar (cons x (cons y body))))
This in turn is enough to implement quasiquotes. That is, if you already have eval, Arc macros without quasiquotation, and lists, then you can implement quasiquote on top of that.
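For instance, here's a rough sketch of the idea (a hypothetical qq macro; it ignores nested quasiquotes, and without reader support you'd write out (qq (bar (unquote x))) by hand):

(def qq-expand (x)
  (if (atom x)
       (list 'quote x)
      (is (car x) 'unquote)
       (cadr x)
      (and (acons (car x)) (is (caar x) 'unquote-splicing))
       (list 'join (cadr:car x) (qq-expand (cdr x)))
       (list 'cons (qq-expand (car x)) (qq-expand (cdr x)))))

; note that qq-expand itself uses only list and cons, no quasiquotation
(mac qq (x)
  (qq-expand x))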
I've done a couple of quasiquote implementations already, so I could probably be of some help, if you'd like.
I think you've gotten to the central issue here, which I failed to explain in the OP. Not supporting quasiquote is actually just a consequence of not supporting eval.
The reason for not supporting eval is that this is a source-to-source compiler like CoffeeScript, not a run-time environment. I do not know how to support eval (and perhaps even a meaningful quote operator) without introducing run-time dependencies.
Perhaps an example can illustrate. Here's an expression with lisp on the left and javascript output on the right:
(+ 1 2) 1+2
Now quote it. What should the output be?
'(+ 1 2) '1+2'
'(+ 1 2) ['+', 1, 2]
Both are quoted expressions in some sense; the first is something JS can eval natively, the second requires LavaScript's special eval. The second (if I'm not mistaken) is what you need for quasiquotation, but it would require introducing into the target environment a run-time dependency on LavaScript's eval. Am I making any sense?
What I was suggesting in the OP is that the lack of eval and quasiquotation doesn't actually rule out macros. Rather it forces your macro system to be of a more limited, basic templating variety, but this is still useful. For example, here's how you define def and let using the proposed system [1]:
(mac def (name parms body...)
  (= name (fn parms @body)))

(mac let (var val body...)
  ((fn (var) @body) val))
> I've done a couple of quasiquote implementations already, so I could probably be of some help, if you'd like.
Great, thanks! And you're already helping by talking with me about it. :)
---
[1] I actually have this working already, except for rest parameters and the @ unquote-splicing operator. Since these are used in so many macro definitions, it isn't very useful quite yet. But the proof-of-concept is there.
it would require introducing into the target environment a run-time dependency on LavaScript's eval
Ohhhh, nice limitation.
Technically, if your compile phase ran the code in a LavaScript-capable environment and you kept a running total of all the compiled code in order to output it at the end, then macros should be fine. They're almost never called by the compiled code (just by the thing that was compiling that code earlier), so they're just cruft. I'd be surprised if there weren't a minifier that could cut them out of the result automatically.
On the other hand, if you did that, it would mean running the code in order to set up any definitions the macros use, and thus you would have to have all your run time libraries loaded at compile time--perhaps including the DOM and such--so maybe that's not what you're going for.
Your templating macro system is something that doesn't take advantage of any run time definitions, and is therefore usable from a completely different compile-time environment. Macros that can't execute arbitrary code are pretty dull, though. Maybe what you need is a phase control macro:
## (This is just a sketch. Please don't bother crediting me.)
realMacros = {}
realMacros.atCompileTime = (body) ->
  each body, (expr) ->
    eval lc(expr)
  'null'  ## or whatever makes sense as an ignored result

(->
  orig = lc
  lc = (s) ->
    if isList(s) and (s[0] of realMacros)
      realMacros[s[0]](s[1..])
    else orig(s)
)()
From here, people oughta be able to use (atCompileTime ...) forms to work in compiler-space, where they can modify the realMacros table directly, perhaps to implement a more convenient macro definition macro.
I like your atCompileTime approach. It would seem to open up the kinds of things the compiler can do while still not requiring anything special about the run-time environment. I'm leaning toward something like this.
> Technically, if your compile phase ran the code in a LavaScript-capable environment and you kept a running total of all the compiled code in order to output it at the end, then macros should be fine.
To be sure I understand, would this be kind of like if LavaScript both compiled every expression to JavaScript (as it does now) and ran it through the atCompileTime evaluator, so as to make it available for use in the compiler space?
To be sure I understand, would this be kind of like if LavaScript both compiled every expression to JavaScript (as it does now) and ran it through the atCompileTime evaluator, so as to make it available for use in the compiler space?
That and atCompileTime were almost independent trains of thought, but yeah, I think you understand it.
The compiler would compile a command, write the compiled command to the output file, evaluate the command itself, then repeat with the next command.
for command in input
  let compiled = compile( command ) in
    output-file.write compiled
    execute compiled
Like I said, I don't know if this applies to your case very well, since the compiler's environment may not adequately simulate the actual application conditions.
The second (if I'm not mistaken) is what you need for quasiquotation, but it would require introducing into the target environment a run-time dependency on LavaScript's eval. Am I making any sense?
Not sure :-) To put it in my own words, macros and quasiquotation are expanded at compile-time. Thus there is a compile-time dependency: in whatever language you write your macros in, you need to be able to call functions written in that language from your compiler.
Take Arc as an example. A macro in Arc is an association between the macro name and a function that does the work of expanding the macro. Saying something like this (annotating a function with the 'mac tag is what Arc's own mac macro boils down to):
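(= foo (annotate 'mac expand-foo))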
now, I can write the "expand-foo" function in any language I want, as long as I can call it from the compiler. Here, I happen to have written it in Arc. But I could have written it in Scheme. Or, I could have written it in Javascript, if I had some way of calling Javascript from Arc, such as by shelling out to a Javascript interpreter. All "expand-foo" does is take one list and return another list, so I could write that in any language.
So, if you want to write full-strength macros in LavaScript, you need some way for your compiler to be able to call, during compilation, a function you've previously written in LavaScript.
Which, since LavaScript functions are compiled into Javascript, means being able to call Javascript functions from your compiler.
If you can do that, then you'll also get quasiquotation, because quasiquotation can be implemented as a macro.
If you can't call Javascript functions from your compiler, and if you want full-strength macros, then you'd need to write your macros in some language that you can call from your compiler.
The reason for not supporting eval is that this is a source-to-source compiler like CoffeeScript, not a run-time environment.
I wasn't paying attention and thought you meant LavaScript was written in CoffeeScript...
but I'm starting to doubt that this language can (or should) support full-fledged quasiquotation
I think I'm starting to get it: it's not that you couldn't support full-fledged quasiquotation if you wanted to by writing out an implementation in Javascript, it's that it wouldn't be very useful without being able to write macros via functions written in LavaScript, which in turn would mean that you'd have to be able to support loading LavaScript programs incrementally, which would mean having the LavaScript compiler in the runtime environment.