The lexical environment is (mostly) accessible at run time. The problem is when you're returning a macro from a function, instead of just renaming it.
Suppose you implement packages as tables of symbols. Suppose also that you have a macro stored in said package. There are two ways to use the macro:
1.
(= newmac package!newmac)
(newmac args)
2.
(package!newmac args)
I think that most people would like to use the second form, because it doesn't require importing the macro into the current namespace. Unfortunately, the second version requires first class macros.
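To make that concrete, here's a rough sketch of the table-as-package idea (names are made up; only the first form works in current arc):

; a package is just a table of symbols; newmac is a hand-built macro
; (annotate 'mac wraps a function, the same way mac itself does)
(= package (obj newmac (annotate 'mac (fn (x) `(prn ,x)))))

(= newmac package!newmac)    ; form 1: import, then call by its global name
(newmac "hi")                ; expands to (prn "hi"), since the compiler sees
                             ; a global symbol bound to a macro

; (package!newmac "hi")      ; form 2: fails today, because the compiler only
                             ; macro-expands when the head is a bare symbol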
Obviously, since the compiler can't easily determine that an expression is going to be a macro, it won't have the advantage of compile time optimization. I'm not sure how important that is. Given that arc is already a "slow" language, why not give it more expressive power at the cost of slightly lower speed? Especially since the user can choose whether or not to use them.
So, as I see it first class macros give you several features:
1. easy to make a package system that can contain macros
2. macros fit in better with functions; no wrapping of macros in functions
3. anonymous macros created for mapping across expressions, etc.
In a call like (expr1 expr2 expr3), evaluation of expr2 and expr3 can be delayed until after expr1 is evaluated. If expr1 turns out to evaluate to a macro, then the literal expr2 and expr3 can be passed to the macro.
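Something like this sketch is what I have in mind (made-up name; note that using eval like this only sees the global environment, which is exactly the lexical-environment caveat above):

; evaluate the head first, then decide whether the remaining expressions
; get evaluated or handed to the macro literally
(def call-head-first (expr1 expr2 expr3)
  (let op (eval expr1)
    (if (isa op 'mac)
        (eval (apply (rep op) (list expr2 expr3)))   ; literal args, then eval the expansion
        (apply op (map eval (list expr2 expr3))))))  ; ordinary call: evaluate the args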
I think the example was supposed to set all of the variable names contained in the list xs to contain the value 'blah.
The example wasn't supposed to do anything. I was just foo-bar-baz-ing something. I don't think the example would actually do that anyway, for the same reason the following doesn't work.
arc> (mac m (x) `(= ,x 'blah))
#(tagged mac #<procedure: m>)
arc> (m 'a)
Error: "reference to undefined identifier: _quote"
I guess it would've alleviated confusion had I said:
Well, first class macros don't work right now anyway, so I wasn't expecting your example to work. I was just trying to figure out the idea behind the example.
And I think that first class macros would be required to set an arbitrary list of variables to an arbitrary list of values via map. Obviously 'with or 'let would work in a local scope, but if you want global variables you'd need something like first class macros, or at least a function that wraps the macro.
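Here's the kind of wrapper I mean (a sketch, with a made-up name):

; without first class macros, wrap the assignment in a function that
; builds and evals an (= ...) form for each global name
(def set-globals (names vals)
  (map (fn (name val) (eval `(= ,name ',val)))
       names vals))

; e.g. (set-globals '(x y z) '(1 2 3)) sets global x, y and z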
You should try and see what subset of the language you actually use, and figure out how hard that would be to implement directly on top of scheme.
I don't really know, but the main thing that I see ac providing is the ability to do arbitrary transformations on the first element of a list before it's passed to scheme. This allows functional-position definitions for all kinds of objects, and could allow for lots of different kinds of function dispatch (such as dispatch on the second arg, which I think pg was planning).
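For instance, all of these already work in functional position because ac rewrites the call:

(= h (obj a 1 b 2))
(h 'a)          ; => 1     table lookup
("hello" 1)     ; => #\e   string indexing
('(a b c) 1)    ; => b     list indexing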
Also, ac (and arc in general) give us a succinctly defined language whose source fits in only two files of less than 1k lines each. This means that the language is easy to learn about and from, and easy to modify. I don't know how many times I've thought "gee, I wish arc could do something cool, like be able to print the source code for global functions" and just gone to the source and added it.
Mind you, that doesn't preclude macros, and in fact arc already is in some ways defined as a set of macros over scheme. It's just that one of those macros is recursive, and visits all of the elements of the source tree ;)
But if performance is what you're after, defining the minimal palatable subset of arc as macros on scheme may be the way to go. It won't be as nice, but it might be a lot faster.
Why not try it, and see? Arc isn't that big, and at least three implementations have been written already.
"the main thing that I see ac providing is the ability to do arbitrary transformations on the first element of a list.."
Ah yes. Every time I try to enumerate them I miss one. But this page has the whole list :)
"ac (and arc in general) give us a succinctly defined language whose source fits in only two files of less than 1k lines each."
Oh, I'm extremely sensitive to that, believe me. That's what's awesome about arc, and it's kept me using it full-time for 9 months now. Making it a layer of macros would actually not detract from that IMO.
As I've done more with arc I've had to deal with the underlying platform anyway. So I feel like there's no point even trying to abstract that away too much.
"Why not try it, and see? Arc isn't that big, and at least three implementations have been written already."
Yes. You know, in any other circumstances I would consider it a fun experiment. But since I started working on readwarp full-time I constantly worry that I'm going to spend time and burn cash on what's fun rather than what takes the product forward. I'll probably still end up doing this, but I'll agonize a lot more before I do.
Well, if speed is your main requirement, and you have an actual app that you want to run faster, I would examine it and determine a) what subset of arc you use and b) what subset of arc you need. If it's most of arc, then rewriting it in scheme might be pretty hard.
Of course, first determine how much speed you actually need. And as aw said, rewriting in a faster language won't necessarily help you scale much anyway. Finding where you need the extra speed is definitely a prerequisite to figuring out the best way to get it.
Ultimately, we can't really tell you what to do, because we don't know nearly as much as you do about the needs you have for your application, the bottlenecks that are occurring, or how much time it would cost to re-implement your system on top of scheme directly.
Lisp is great for bottom-up design, but whether you can get all the way up to your application before you've essentially rebuilt arc isn't something we can really judge from here. Certainly some subset of arc can be built as mere macros over scheme. But only you can determine whether that subset is comfortable enough to use, and fast enough to solve your problems.
This defines a proxy-ip function, which is later used to determine whether an ip is abusive, by returning either the ip given by sock-accept or the one in the X-Forwarded-For header.
It seems to me that the easiest way to support X-Real-Ip would be (presuming it's an http header) to let proxy-header be "X-Real-Ip" instead of "X-Forwarded-For". Alternatively, you could expand the functions to be able to take a list of header strings instead of just one.
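Something like this is what I mean by the second option (a rough sketch, not the real srv.arc code; headers here stands in for whatever table of header name -> value the request handler builds):

(= proxy-headers* '("X-Real-Ip" "X-Forwarded-For"))

; return the value of the first proxy header that's present, else the socket ip
(def proxy-ip (default-ip headers)
  (or (some [headers _] proxy-headers*)
      default-ip))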
Actually, I don't think that's a complete solution. It sets the cdr of ls to be the nth cdr of ls. What you want is to set the nth cdr of ls to the (n+1)th cdr of ls.
Wow, looking back it was awfully dense of me to have your clear explanation about needing to define a setter and then coming over here to post some hack using join. Sorry for that.
Even though my popnth function works, it's a worse solution because by ignoring setforms it only works for the specific case of pop rather than the whole family of destructive functions. Is this accurate?
I'm a good deal more comfortable with the setter concept after your and thaddeus' examples. Thanks to both of you for your patience with a newb. I've been super impressed with this forum so far: the community is small but outstanding.
Yep. By defining popnth, you get a single function that performs a specific task. But if you define a setter, any function that wants to operate on a "place" can now use it.
This is a different idea. He's pointing out that in general functions can't be printed out, because a function can be represented by something very complicated and hard to print. His example, closures, involves function objects that were defined in a lexical context that matters to their behavior. Merely knowing their source code won't be helpful, since two identical-looking functions can behave very differently.
If you knew it was a closure, you could probably substitute the variables values for their names, and then print the resulting code, but that requires that you know both what the values are, and the fact that the function is a closure.
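For example, these two closures share the same source text and print identically, but behave differently because of the values they close over:

(def adder (n) (fn (x) (+ x n)))

(= add1  (adder 1))
(= add10 (adder 10))

(add1 5)     ; => 6
(add10 5)    ; => 15
; both display as #<procedure>, and both came from (fn (x) (+ x n))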
Also, I think he's describing the more complicated concept of reverse engineering the actual data type of a function, and printing a representation of it. Taking code that runs, and turning it into code you can read is not an easy challenge. I dare you to macex even a simple function. Taking that and turning it back into something you can read, with meaningful names, is impossible (I think).
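For example (exact output depends on the arc version's definitions of def and do):

(macex '(def average (x y) (/ (+ x y) 2)))
; expands into nested (fn ...), sref, and safeset forms, with none of the
; readability of the original def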
At any rate, with 'src I took the shortcut of storing the source code in a table every time a function is defined. This has the disadvantage of only working for global, named functions defined with a small set of function creators (def and mac, mainly) since it uses a global name table. If it stored the string representing the code with the function object itself, it would probably be able to handle local, un-named functions as well, but would still only work with select function creators. This is mainly because you wouldn't want to see the crud added by defop over defop-raw, etc.
So, in general, displaying executable code in a human-readable format is an immense challenge. The more about the original function definition you keep around at run-time, the more easily you can represent it in a human readable form. The hack that is 'src achieves pretty function printing at the cost of storing the whole source code of global functions in a table at run-time. It still lacks a lot of information, and doesn't cover functions defined in local contexts, but it achieves at least a portion of the goal of exploratory, interactive programming.
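For the curious, the idea behind 'src boils down to something like this (a simplified sketch, not the actual anarki code):

(= source* (table))    ; global name -> source form

; a def variant that records its own source before defining the function
(mac def-src (name parms . body)
  `(do (= (source* ',name) '(def ,name ,parms ,@body))
       (def ,name ,parms ,@body)))

(def src (name) (source* name))

; (def-src average (x y) (/ (+ x y) 2))
; (src 'average)  => (def average (x y) (/ (+ x y) 2))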
The answer is based on the fact that 'pop operates on a "place". In other words, it is expecting as input something that can be assigned to.
To see an example:
arc> (= a '(1 2 3 4))
(1 2 3 4)
arc> (pop cdr.a)
2
arc> a
(1 3 4)
You'll notice that not only did pop return 2, it also removed it from a. This functionality is provided to pop by the macro 'setforms, which relies on a table 'setter that maps operator names (like cdr) to functions that generate the code needed to assign to such forms.
So cdr has a setter defined by
(defset cdr (x)
  (w/uniq g
    (list (list g x)
          `(cdr ,g)
          `(fn (val) (scdr ,g val)))))
but nthcdr doesn't currently have a setter defined for it.
So, if you want nthcdr to work with pop and similar destructive functions, you will need to define a setter for it.
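For example, a setter for nthcdr could look something like this (an untested sketch modeled on the cdr setter above; it assumes n >= 1, since replacing the 0th cdr would mean replacing the whole list):

(defset nthcdr (n xs)
  (w/uniq (gn gxs)
    (list (list gn n gxs xs)
          `(nthcdr ,gn ,gxs)
          `(fn (val) (scdr (nthcdr (- ,gn 1) ,gxs) val)))))

; with that in place, (pop (nthcdr 2 a)) removes and returns the element
; of a at index 2, and push, swap, etc. work on (nthcdr n xs) places too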
Yep, the trailing t and lack of newline are due to ppr.
Fortunately, I also happen to have an alternative version of ppr on anarki, under the lib folder, which has just been updated to handle multiple expressions and print newlines.
Just pull anarki again, and use
(load "lib/ppr.arc")
It should redefine sp, len, and ppr, and make source code printing much more readable than the pprint.arc version ;)