Arc Forum | aw's comments

I imagine that my complaints about GitHub Pages at the time were probably just growing pains on GitHub's part.

However, precisely because we'll want to implement our own features at some point, such as the cross references you mention, I expect that we're going to want to do our own processing. That suggests GitHub Pages or the arclanguagewiki on Google Sites might be part of the right long-term solution, but only if there's a way to, e.g., insert a piece of an API reference that we're generating.

Here's a thought. What if we had a server which went out and gathered documentation source material from various places such as Anarki? (GitHub has http://help.github.com/post-receive-hooks/ so the server could get notified of new pushes to Anarki instead of having to poll.)

The server would work on the text of the sources, such as docstrings found in the Anarki source code. That way, even if someone pushed something malicious to Anarki, we wouldn't have a security problem (either on the server or in the reader's browser). The server would process the documentation source material and generate static HTML files... which could be hosted on S3 or GitHub Pages. This would have the additional advantage that even if the server were down, the documentation itself would still be up and available.
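
To make that concrete, here's a minimal sketch of the receiving end, assuming Anarki's srv.arc is loaded; regen-docs! is a hypothetical function standing in for the re-scrape and static HTML generation:

  ; hypothetical endpoint for GitHub's post-receive hook: GitHub
  ; POSTs a "payload" argument describing the push, which this
  ; sketch ignores; it just rebuilds everything
  (defop anarki-push req
    (regen-docs!)
    (prn "ok"))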

-----

1 point by rocketnia 5359 days ago | link

Post-receive hooks! Great find. XD

---

"The server would work on the text of the sources, such as docstrings found in the Anarki source code."

With this approach, people might be pushing to Anarki way more, sometimes using in-browser file edits on GitHub, and the server would have to scrape more and more things each time. Then again, that would be a good problem to have. :-p

---

"That way even if someone pushed something malicious to Anarki then we wouldn't have a security problem (either on the server or in the reader's browser)."

By the same token, it would be harder for just anyone to update the server, right? Eh, that might be a necessity for security anyway.

Potentially, parts of the server could run Arc code in a sandbox, incorporating the Arc code's results into the output with the help of some format that's known to have no untrusted JavaScript, like an s-expression equivalent of BBCode or something.
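
A minimal sketch of that last idea (the names here are made up): only whitelisted tags are rendered, and all text is escaped, so the output can't smuggle in JavaScript.

  (= safe-tags* '(p b i em code pre ul li))

  (def esc-html (s)
    (tostring
      (each c s
        (pr (case c #\< "&lt;" #\> "&gt;" #\& "&amp;" c)))))

  (def render-safe (x)
    (if (isa x 'string)
         (esc-html x)
        (and (acons x) (mem (car x) safe-tags*))
         (string "<" (car x) ">"
                 (apply string (map render-safe (cdr x)))
                 "</" (car x) ">")
         (err "disallowed markup")))

  (render-safe '(p "1 < 2 " (b "bold")))  -> "<p>1 &lt; 2 <b>bold</b></p>"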

-----

1 point by aw 5359 days ago | link

"parts of the server could run Arc code in a sandbox"

What sort of Arc code were you thinking of?

-----

1 point by rocketnia 5359 days ago | link

Well, code that generates page contents.... Suppose I want to put "prev" and "next" links on several pages, or suppose I want an API reference to automatically loop through and include all the docstrings from a file. Lots of this could be up to the server to do, but I'd like for the documentation itself to have some power along these lines. For instance, someone might write a DSL in Arc and want to set up a whole subsite covering the DSL's own API.
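
For the docstring part, I'm imagining something as simple as this sketch (assuming the scraped docstrings end up in a help* table keyed by name, as in Anarki):

  (def api-reference (names)
    (tostring
      (each name names
        (prn name ": " (or (help* name) "undocumented")))))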

Besides that, it would just be nifty to have the Arc documentation improve as people improved the Arc libraries and vice versa.

-----

1 point by aw 5359 days ago | link

"Suppose I want to put "prev" and "next" links on several pages, or suppose I want an API reference to automatically loop through and include all the docstrings from a file."

I'd just have the server code do that.

"For instance, someone might write a DSL in Arc and want to set up a whole subsite covering the DSL's own API."

Sorry, not following you here. How would this be different?

"Besides that, it would just be nifty to have the Arc documentation improve as people improved the Arc libraries and vice versa."

Certainly. Naturally the server code can be written in Arc itself.

-----

1 point by rocketnia 5359 days ago | link

Say this DSL is a stack language written in Arc, called Starc, and Starc programs are implemented by lists of symbols. I've set up a global table to map from symbols to their meanings, and I have a 'defstarc macro that submits to that table and supports docstrings.
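
A minimal sketch of what I mean (the table layout here is hypothetical):

  (= starc-words* (table))

  (mac defstarc (name docstring . words)
    `(= (starc-words* ',name) (list ,docstring ',words)))

So the example below registers word4 along with its docstring: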

  (defstarc word4
    "Demonstrates stuff."
    word1 word2 word3)
Now I want my language to have documentation support that's seamless with Arc's own documentation. Somehow I need my Starc documentation to be split across multiple pages, with some pages created using the 'defstarc docstrings. I want Starc identifiers to be displayed in a different style than Arc identifiers, but if anything, I want it easier for a Starc programmer to refer to Starc identifiers in the documentation than to Arc identifiers.

So every time I come up with one of these requirements for the documentation, I should submit a patch to the server or something? Fair enough--the code implementing the documentation oughta be documented somewhere too, and keeping it close to the project also makes it more ad hoc and inconsistent--but I think this would present a bit of an obstacle to working on the documentation. I'd rather there be a compromise, where truly ad hoc and experimental things were doable in independent projects and the most useful documentation systems moved to the server code gradually.

This would be more complicated to design, and it could probably be incorporated into a more authoritarian design after it's underway, so no worries.

-----

2 points by aw 5358 days ago | link

Well, I imagine there'd be two stages:

- you run a copy of the server code you're working on locally, until you see that your "Starc" documentation is being integrated into the rest of the documentation in the way that you want it to

- you push your changes to the server repo (via GitHub, say) and they go live

OK, but what if you're a completely random person, you've never posted anything to arclanguage.org, no one knows who you are, and you want write access to the server so that you "can do stuff"? Alright: fork the repo on GitHub, push your changes there, and send a pull request. Then, when you turn out to be someone who isn't trying to install malicious Javascript, you are given write access to the server repo yourself. (This is a pretty standard approach in open source projects, by the way.)

But... what if write access to the server repo ends up being controlled by an evil cabal of conservatives who reject having any of this "Starc" stuff added? Fire up your own server, publish the documentation pages yourself, and people will start using your documentation pages because they are more complete than the old stuff.

My concern with the sandbox idea is that I imagine it's going to be hard to create a sandbox that is both A) powerful enough to be actually useful, and B) sufficiently constrained so that there's no possible way for someone to manage to generate arbitrary Javascript.

I'm finding this discussion very helpful, by the way. What I'm spending my time on now is the "pastebin for examples" site. I've been wondering if this project would stay focused on just the examples part (with the ability for other documentation sites to embed examples from the pastebin site) or if it would expand to be a site for complete documentation itself (the "code site for Arc" idea).

For the pastebin site I've thrown away several designs that weren't working, and I've found one that so far does look like it's going to work. But the catch is that by design it allows the site to execute arbitrary code on the target machine that's running the example. This isn't too terrible by itself (you can always run the example in a virtual machine or on an Amazon EC2 instance etc. instead of on your own personal computer if you want), but it does mean that the "pastebin for examples" site is going to need a higher level of security than an Arc documentation site.

Which in turn implies that while the Arc documentation site can use examples from the pastebin site (if people find it useful), the pastebin site itself shouldn't be expanding to take on the role of the Arc documentation site (since the Arc documentation site can and should allow for a much freer range of contributions).

-----

1 point by rocketnia 5358 days ago | link

"But... what if write access to the server repo ends up being controlled by an evil cabal of conservatives who reject having any of this "Starc" stuff added?"

The main thing I'm afraid of is the documentation site becoming stagnant. Too often, someone finds the arclanguage.org website and asks "How do I get version 372 of MzScheme?" Too often, someone who's been reading arcfn.com/doc the whole time finally looks at the Arc source and starts a forum thread to say "Look at all these unappreciated functions!" ^_^

I don't blame pg or kens; I blame the fact that they don't have all the time in the world to do everything they want. I'm in the same position, and I bet it's pretty universal.

---

"Fire up your own server, publish the documentation pages yourself, and people will start using your documentation pages because they are more complete than the old stuff."

That could be sufficient. But then while I'm pretty active on this forum, I'm not sure I have the energy to spare on keeping a server up. If the community ends up having only people as "let someone else own it" stingy as me, we'll be in trouble. >.>;

---

"My concern with the sandbox idea is that I imagine it's going to be hard to create a sandbox that is both A) powerful enough to be actually useful, and B) sufficiently constrained so that there's no possible way for someone to manage to generate arbitrary Javascript."

All I'm thinking of is some hooks where Arc code can take as input an object capable of querying the scrape results and give as output a BBCode-esque representation that's fully verified and escaped before use. But then I don't know if that would be sophisticated enough for multi-page layouts or custom styles or whatnot either. ^^;

There could also be another Arc hook that helped specify what to scrape in the first place... but in a limited way so that it couldn't do denial-of-service attacks and stuff. ^^; lol

Partly it's just a curiosity for me. I like the thought of letting Arc code be run in a sandbox for some purpose, even if it's only marginally useful. :-p

---

Meanwhile, I had another thought: Even if the server doesn't allow running arbitrary code, people could still develop special-purpose things for it by running their own static site generators and putting up the output somewhere where the server will crawl. I wonder how this could affect the server design.

-----

2 points by aw 5358 days ago | link

"But then while I'm pretty active on this forum, I'm not sure I have the energy to spare on keeping a server up."

I'd be happy to run the server, and set up some kind of simple continuous deployment system so that when someone makes a code push to the server repo the code goes live.

Depending on availability and motivation I may (or may not...) end up having time myself to get Ken's documentation into a form where it can be edited (he generously offered last year to let us do this).

A part that I don't have motivation to do myself is writing the code that would crawl Anarki and generate documentation from the docstrings.

"I like the thought of letting Arc code be run in a sandbox for some purpose, even if it's only marginally useful."

I certainly won't prevent someone from adding a sandbox to the server. On the other hand... if you'd like to work on something where a sandbox would be useful ^_^, I'd encourage you to join me in my API project :-)

-----

1 point by SteveMorin 5358 days ago | link

"The main thing I'm afraid of is the documentation site becoming stagnant. Too often, someone finds the arclanguage.org website and asks "How do I get version 372 of MzScheme?" Too often, someone who's been reading arcfn.com/doc the whole time finally looks at the Arc source and starts a forum thread to say "Look at all these unappreciated functions!" ^_^ I don't blame pg or kens; I blame the fact that they don't have all the time in the world to do everything they want. I'm in the same position, and I bet it's pretty universal."

I think if contributing is open and flexible, people will contribute to keep the site up to date. Complete and simple instructions must exist to help and encourage people to contribute. Some of it is social: people feel they need "permission" to contribute.

The interesting thing I'm seeing among the experiments and projects people are doing here is the fragmentation. I think experimentation with languages is great and very necessary, but it's unfortunate that there isn't a main champion for the community to rally behind.

-----

1 point by SteveMorin 5358 days ago | link

PS: stupid question, but how are you italicizing quoted text? I tried adding <i>some text</i>, but that didn't work. I haven't had enough time to play with the comments to figure that out.

-----

1 point by aw 5358 days ago | link

http://news.ycombinator.com/formatdoc
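
(In short, per that page: text surrounded by asterisks is italicized, e.g. *some text*, and lines indented by two or more spaces after a blank line are reproduced verbatim as code. HTML tags aren't interpreted.)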

-----

1 point by SteveMorin 5359 days ago | link

I like the sound of this setup as an Arc site that generates static pages. It could even push those pages back to the git repo.

-----

1 point by Pauan 5359 days ago | link

"The server would work on the text of the sources, such as docstrings found in the Anarki source code. That way even if someone pushed something malicious to Anarki then we wouldn't have a security problem (either on the server or in the reader's browser)."

If it ever got to the point where actually eval'ing the code were necessary/desirable, you could do so in a safe namespace in PyArc (hint hint).

-----


"since anyone can contribute to anarki"

Something to keep in mind is that while many people use Anarki, others base their work on Arc 3.1 or different runtimes such as the JVM, Python, etc.

-----


"Arc already supports docstrings"

Actually docstrings are an Anarki enhancement.

-----

1 point by SteveMorin 5359 days ago | link

a key Anarki enhancement. Does anyone know if Anarki has keyword arguments too?

-----


The code site that I'm working on is in fact primarily driven by a desire to fix the documentation problem.

My approach is to make documentation more wiki-like so that multiple people can easily contribute. I find examples particularly useful in documentation, so a "pastebin for examples" is another major part of my work.

I hope that this will turn out to be more useful than the traditional approach of embedding documentation in the program source because it will make it easier for other people to contribute to the documentation.

-----

1 point by rocketnia 5360 days ago | link

You beat me to the punch by 25 minutes. ^_^ My reply below (at http://arclanguage.org/edit?id=14084) covers the same sort of stuff as yours but in a longer and more scatterbrained way.

-----

2 points by aw 5363 days ago | link | parent | on: Arc3.1 optimizations

"eradicate all meanings of + that aren't addition"

Also note that the Racket + actually performs two different kinds of addition (exact or inexact/floating point) depending on the arguments passed to it:

  $ racket
  Welcome to Racket v5.0.2.
  > (+ 5 1/3 2)
  22/3
  > (+ 5 1/3 2.)
  7.333333333333333
So if you want to get "the arithmetic + without the join +" you may also want a way to get "the exact addition +" or "the floating point addition +".
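
For instance, an exact-only + is easy to sketch if you have the $ escape into Racket used elsewhere on this forum (Racket's exact? does the checking; the name exact+ is made up):

  (def exact+ args
    (if (all $.exact? args)       ; assumes numeric arguments
        (apply + args)
        (err "exact+: inexact argument")))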

-----

4 points by shader 5363 days ago | link

Why? I'm not sure I understand the extreme dislike of overloaded operators. Not that this is necessarily directed at your comment; I mean the concept in general.

If it's because of efficiency, the Racket operator is probably fairly efficient anyway, and might even do some compile-time type checking to see whether the result is likely to be one or the other.

At some point, instead of having nice operators to encapsulate the alternatives and paying the cost of minimal dynamic checking (if any), you'll end up building your own much more expensive explicit dynamic checking instead.

Also, I'm not sure how this urge for separation of functions can contribute much to Arc's main goals of brevity and simplicity. Having one function with a short name to handle many very closely related operations makes plenty of sense most of the time, and lets you use plain + instead of +-float and +-int.

-----

4 points by aw 5362 days ago | link

Oh, for myself I routinely use + for joining lists, so I'm firmly in the general purpose + camp.

My point was merely that whatever argument someone may have for preferring an arithmetic-only +, there is an isomorphic argument for more specifically preferring an exact-+ or a floating-+, and so if the former is applicable to your situation you may also want to consider if the latter also applies.

-----

5 points by waterhouse 5349 days ago | link

Hi, finally replying to this thread. So, I, a long time ago, found that the + function was allocating memory every time it was called; this a) made things slower than they had to be (according to my old thread, incrementing a variable was half as fast as it should have been), b) triggered garbage collections, which were annoying pauses at the REPL, and c) triggered garbage collections when I was doing, like, (time:repeat 100000 <test procedure>) (note that "repeat" is implemented with "for", which uses +).

The problem with (c) is that it made the time to completion rather variable (depending on whether a GC happened, or sometimes on how many GCs happened), so the results were difficult to interpret. It interfered with my measuring and comparing the performance of my procedures.

So it annoyed me. And I found that just using the Scheme + fixed it, stopped the malloc'ing. To know that this crap was caused by a feature I didn't use and that Paul Graham seemed to have declared a bad idea, a feature that seemed like it shouldn't even be there anymore... I took it upon myself to eradicate it from the source code. And I felt better afterwards.

But now this case-lambda version of + (which is polymorphic) seems to not malloc at all, and a bit of testing by me indicated it performed approximately as well as the raw Racket +; I couldn't see any difference. (Btw, I think I perceived that Racket doesn't malloc for argument lists when you call "apply" or something, but it does malloc--probably copies the arglist--if you access it with car or cdr.)

So polymorphic + is no longer so obnoxious to me, and I no longer feel the need to eradicate it. I'd probably still prefer to use "string" to join/coerce to strings and "join" to join lists, myself, and I'd probably prefer "join" as a generic join operator. But I won't kill concatenate-+ on sight. Since overloading itself seems not to be the cause of the performance problems, I might warm to the idea... maybe overload + for my polynomials--or for matrices, I've been playing with them recently--though I've actually only multiplied them, never added them--perhaps I should overload *...

-----

2 points by aw 5349 days ago | link

How did you measure how much memory was being allocated by the various alternatives?

-----

3 points by waterhouse 5348 days ago | link

You may find the following definitions extremely useful. (I do.) Evolved from my previous "mem-use" macro.

  ;note that "memory" = $.current-memory-use
  ; and "current-gc-milliseconds", "current-process-milliseconds"
  ; were supplied in ac.scm
  
  (= gc-msec      current-gc-milliseconds
     process-msec current-process-milliseconds)
  
  (mac utime body
    (w/uniq (gtime ggc gmem)
      `(with (,gtime (msec) ,ggc (gc-msec) ,gmem (memory))
         (do1 (do ,@body)
              (prn "time: " (- (msec) ,gtime)
                   " gc: " (- (gc-msec) ,ggc)
                   " mem: " (- (memory) ,gmem))))))
  
  (= time utime)
You might expect some memory or time overhead just from using this, but in fact there doesn't seem to be any:

  arc> (time:- 1 2)
  time: 0 gc: 0 mem: 0 ;zero overhead
  -1
  ;now I copy the old definition of + from ac.scm to clipboard
  arc> (= + (eval:list '$ (read:pbpaste))) 
  #<procedure>
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 352 ;probably from JIT-compiling +
  3
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 64
  3
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 64 ;by now this is clearly the usual mem-use
  3
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 64
  3
  arc> (= + $.+) ;Racket +
  #<procedure:+>
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 0
  3
  arc> (time:+ 1 2)
  time: 0 gc: 0 mem: 0
  3
  ;now I copy this definition to clipboard:
  (define (ar-+2 x y)
    (cond ((char-or-string? x)
           (string-append (ar-coerce x 'string) (ar-coerce y 'string)))
          ((and (arc-list? x) (arc-list? y))
           (ac-niltree (append (ar-nil-terminate x) (ar-nil-terminate y))))
          (#t (+ x y))))
  arc> (eval:list '$ (read:pbpaste))
  #<void>
  arc> (time ($.ar-+2 1 2))
  time: 0 gc: 0 mem: 416 ;JIT
  3
  arc> (time ($.ar-+2 1 2))
  time: 0 gc: 0 mem: 0 ;so it allocates zero memory
  3
  arc> (time ($.ar-+2 1 2))
  time: 0 gc: 0 mem: 0
  3
Now for more playing around. A Racket loop that counts to 1 million.

  arc> (time:$:let loop ((n 0)) (if (> n 1000000) 't (loop (+ n 1))))
  time: 3 gc: 0 mem: 760
  t
  arc> (time:$:let loop ((n 0)) (if (> n 1000000) 't (loop (+ n 1))))
  time: 5 gc: 0 mem: 920 ;kinda weird, it alternates 760 and 920
  t
  ;now up to 10 million
  arc> (time:$:let loop ((n 0)) (if (> n 10000000) 't (loop (+ n 1))))
  time: 36 gc: 0 mem: 760
  t
  ;now we test ar-+2
  arc> (time:$:let loop ((n 0)) (if (> n 10000000) 't (loop (ar-+2 n 1))))
  time: 1020 gc: 0 mem: 1096
  t
  arc> (time:$:let loop ((n 0)) (if (> n 10000000) 't (loop (ar-+2 n 1))))
  time: 1019 gc: 0 mem: 776
Apparently it makes a significant difference (30x) inside a tight Racket loop. For fun, let's try the unsafe ops too.

  arc> ($:require racket/unsafe/ops)
  #<void>
  arc> (time:$:let loop ((n 0)) (if (unsafe-fx> n 10000000) 't (loop (unsafe-fx+ n 1))))
  time: 28 gc: 0 mem: 760
  t
Hmm, I had an idea. Put the number check first. New definition:

  (define (ar-+2 x y)
    (cond ((number? x) (+ x y))
          ((char-or-string? x)
           (string-append (ar-coerce x 'string) (ar-coerce y 'string)))
          ((and (arc-list? x) (arc-list? y))
           (ac-niltree (append (ar-nil-terminate x) (ar-nil-terminate y))))
          (#t (+ x y))))
  
  arc> (eval:list '$ (read:pbpaste))
  #<void>
  arc> (= + $.ar-+2)
  #<procedure:ar-+2>
  arc> (time:repeat 1000000 nil)
  time: 122 gc: 0 mem: 3256
  nil
  arc> (time:repeat 1000000 nil)
  time: 121 gc: 0 mem: 1880
  nil
  arc> (time:$:let loop ((n 0)) (if (> n 10000000) 't (loop (ar-+2 n 1))))
  time: 310 gc: 0 mem: 776
  t
  arc> (time:$:let loop ((n 0)) (if (> n 10000000) 't (loop (ar-+2 n 1))))
  time: 323 gc: 0 mem: 936
  t
What, now the Arc loop goes faster than it did with the Racket +. Probably because this function assumes two arguments, while Racket + assumes any number. And now the Racket loop is only 10x slower than with Racket +. (I expect Racket knows how to optimize its own 2-argument +.) By the way, in case you think the Arc loop goes faster than Racket, note that the Arc loop goes to 1 million and the Racket loop to 10 million; Racket is here going 3x as fast as Arc (and if Racket used Racket +, it'd be going 30x as fast).

Oh, um, I should test the case-lambda thing. I copy to clipboard my definition of + from the Optimizations page:

  arc> (cp)
  #<procedure:zz>
  arc> (time:repeat 1000000 nil)
  time: 125 gc: 0 mem: 3096
  nil
  arc> (time:repeat 1000000 nil)
  time: 128 gc: 0 mem: 2200
  nil
Virtually no difference from the raw ar-+2. Just for reference, so I can be sure ar-+2 only goes faster than Racket + in the Arc loop because ar-+2 effectively declares that it takes only two arguments:

  arc> (= + ($:lambda (x y) (+ x y)))
  #<procedure:zz>
  arc> (time:repeat 1000000 nil)
  time: 115 gc: 0 mem: 2776
  nil
  arc> (time:repeat 1000000 nil)
  time: 114 gc: 0 mem: 2040
  nil
Good, I'm not going crazy. Anyway, that final version of ar-+2 is my official recommendation.
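
For reference, the case-lambda shape is roughly the following; this is a sketch, not the exact Optimizations-page definition. The fixed arities avoid consing an argument list in the common cases, and the rest clause folds with the two-argument ar-+2 above:

  (= + ($ (letrec ((plus (case-lambda
                           (() 0)
                           ((x) x)
                           ((x y) (ar-+2 x y))
                           ((x y . rest) (apply plus (ar-+2 x y) rest)))))
            plus)))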

Oh, incidentally, this "time" thing (though I think it was just "mem-use" back then) also helped me figure out that "<", ">", and "is" were allocating memory (provoking my work behind the Optimizations page). For example, I found that (no x) malloc'd; I eventually realized that this is because "is" mallocs, and (no x) is defined as (is x nil).

Also, here's some further relevant usage of "time":

  arc> (time:no:= xs (n-of 1000000 0))
  time: 2351 gc: 331 mem: 32513488
  nil
  arc> (time:no:zap nrev xs)
  time: 152 gc: 0 mem: 632 ;hell yeah, nrev works well
  nil
  arc> (time:no:zap rev xs)
  time: 254 gc: 0 mem: 32000584
  nil
  ;now here's when it GC's
  arc> (time:no:zap rev xs)
  time: 371 gc: 117 mem: 31824248
  nil
  arc> time:len.xs ;this is my fixed "len"
  time: 8 gc: 0 mem: 0
  1000000
  ;now, for comparison, let's see the old definition of "len" from ac.scm
  arc> (= len (eval:list '$ (read:pbpaste)))
  #<procedure>
  arc> time:len.xs
  time: 461 gc: 261 mem: 68936656 ;OMG this is ridiculous
  1000000

-----

1 point by Pauan 5363 days ago | link

I'm fine with overloaded functions (assuming it makes sense to overload them). The problem isn't that + is overloaded to accept multiple types (that's fine). The problem is that it's trying to fulfill two different purposes at the same time (addition and concatenation), so there's inconsistencies with regard to numbers.

Thus, separating those two behaviors into two separate functions can make sense. In this case, it would be `join` being used for the current concatenate behavior, and `+` being used for numeric addition only. This would also allow us to sanely fix certain behaviors:

  (+ 5 "foo")    -> error
  (join 5 "foo") -> "5foo"
One downside with that approach is that `join` is 3 characters longer than `+`. Oh, and it would require changing `join` so it accepts non-conses, but I would consider that a good change, to be honest. It would allow for stuff like this:

  (join '(1 2 3) 4) -> (1 2 3 4)
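A rough sketch of that change (join* is a placeholder name, so it doesn't clobber the existing join; the string-coercing case above would need an extra clause):

  ; caveat: a nil argument becomes an element here,
  ; unlike with plain join
  (def join* args
    (apply join (map [if (acons _) _ (list _)] args)))

  (join* '(1 2 3) 4)  -> (1 2 3 4)
  (join* 1 '(2 3) 4)  -> (1 2 3 4)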
By the way, I don't think it's a super big deal whether + handles non-numeric types or not. But at the same time, I can see some of the reasoning behind splitting addition/concatenation into two different functions.

P.S. With regard to the performance of a highly abstract Lisp written in itself, running on top of the Racket interpreter: given how slow Arc is in general, I doubt + is going to be the bottleneck.

P.P.S. I believe aw's post was mostly about preserving accuracy in those situations where you can't have any sort of rounding errors. Obviously the default + should behave as it currently does, coercing to the more abstract numeric type. However, in the situations where you need that precision, it might be nice to have a specialized + for that, like +-precise or whatever.

-----

2 points by evanrmurphy 5362 days ago | link

"The problem is that it's trying to fulfill two different purposes at the same time (addition and concatenation), so there's inconsistencies with regard to numbers."

If your numbers were Church encoded, then I guess there wouldn't be a difference between addition and concatenation. XD (This is given that concatenating functions is the same as composing them.)
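
A quick sketch for the curious: with Church numerals, a number n is "apply f n times," and addition really is just composing the partial applications.

  (= czero (fn (f) (fn (x) x)))

  (def csucc (n) (fn (f) (fn (x) (f ((n f) x)))))

  (def c+ (m n) (fn (f) (fn (x) ((m f) ((n f) x)))))

  (def unchurch (n) ((n [+ _ 1]) 0))

  (unchurch (c+ (csucc czero) (csucc:csucc czero)))  -> 3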

-----

1 point by Pauan 5362 days ago | link

Yeah, sure, and let's just use lambdas for everything:

  (def cons (x y) (fn (f) (f x y)))
  (def car (x) (x (fn (a b) a)))
  (def cdr (x) (x (fn (a b) b)))

  (car (cons 'a nil)) -> a
:P

On a semi-related note, I had an interest a while back in implementing a Lisp interpreter using only lambda calculus. I'm still not sure if it's possible or not, but I wanted to give it a try.

-----

1 point by aw 5367 days ago | link | parent | on: Module idea for PyArc

Then you could do foo.'a or foo."a", something I've desired many times.

Can someone post a summary of how they would like ssyntax to work? Along with getting the syntax to work in more cases, I'm vaguely aware, for example, that people have preferences as to whether a.b.c should be (a (b c)) or ((a b) c), but I don't know what they are.

-----

1 point by aw 5367 days ago | link | parent | on: Module idea for PyArc

As an aside, eval in my own runtime project works the same way: you can pass it an optional second argument to use as the namespace.

A common criticism of this style of module implementation is that it doesn't provide for a way to avoid namespace collisions on prerequisite macros. E.g., if I import a macro foo which expands into bar, I have to import bar as-is (I can't rename it to something else, because then the expansion of foo would break). It doesn't personally bother me though, since I always go and easily rename bar in the source if it's causing a problem for me.

-----

1 point by Pauan 5367 days ago | link

Okay, so, I gave this some thought. I might be able to do some kind of hacky thing where when a macro expands, it first checks for variables in the macro's scope. Then it checks for variables in the defining module's scope. Then it checks for variables in the importer's scope.

Sounds really hacky. I may need to use name-munging, but if I implement it at the interpreter level, at least ordinary code shouldn't be aware of it. Ideally it would be completely transparent.
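
In outline, the lookup chain would be something like this (env-get is a hypothetical accessor, and variables bound to nil would need more care than this sketch takes):

  (def chained-lookup (name envs)
    (if (no envs)
        (err "undefined variable:" name)
        (or (env-get (car envs) name)
            (chained-lookup name (cdr envs)))))

  ; e.g. (chained-lookup 'x (list macro-env defining-env importer-env))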

-----

1 point by rocketnia 5367 days ago | link

Oh yeah, this is an interesting issue. It's one reason I'm going with hygienic macros in Penknife, besides simple cleanliness. :-p The technique I use in Penknife, as it turns out, is an example of syntactic closures, but I didn't know it until well after I had an okay system in place. I found out when I read this from http://lambda-the-ultimate.org/node/4196:

"Other frameworks for lexically-scoped macros, notably syntactic closures (Bawden and Rees 1988) and explicit renaming (Clinger 1991), use a notion of lexical context that more directly maps to the programmer's view of binding scopes. Unfortunately, the more direct representation moves binding information into the expansion environment and, in the case of syntactic closures, tangling the representation of syntax and expansion environments."

I had exactly this entanglement happening in Penknife at one point. At the time a Penknife macro is defined, the local environment is captured, and later on, syntax generated by the macro is wrapped up in a way that refers back to that environment. At one point I was just putting the environment itself in the wrapper.

For your purposes, that might be just fine, especially if you're just going to eval the macro's result right when you get it. Then again...

PyArc's modules sound exactly like I want Penknife's (not yet existing) modules to be, and you seem to value the ability of the library users to configure the libraries to their own purposes, much like I do. I suspect, in our languages, programs will need to load the same library multiple times for different purposes, especially when it comes to diamond dependencies, where two individual libraries might depend on a common third library but configure it in different ways.

With Penknife's approach to extensible syntax, it turns out parsing is really slow, so I've organized the language so that it can parse a library in one pass and then run the resulting code over and over. ("Parse" basically means "compile" in Penknife, but Penknife does one or two additional things that could be called compilation, so I shy away from that word altogether.) That exposes a weakness of tangled syntactic closures: The compiled representation holds the original captured environment, rather than the one that should be used this time around.

So for Penknife I ended up storing the environment in the macro itself and complicating my environments so that variables were referred to based on what sequence of macro-unwrapping it took to get to them. Since the person using the macro obviously has the macro in their namespace, it comes together pretty nicely. I'm happy with the result, but it's not nearly as simple as (= env!foo 4), so I can't expect you to follow the same path. ^^

-----

1 point by Pauan 5367 days ago | link

In PyArc, right now, we just re-parse and re-eval it, so loading the same module twice is not a problem.

Okay, so, I fixed the issue with it thinking a variable was undefined when it wasn't. I then found out something very interesting:

  ; foo.arc
  
  (assign message* "hello")
  (mac ret () 'message*)
  
  ; bar.arc
  
  (import foo "foo.arc")
  (foo!ret) -> "hello"

  (= ret foo!ret)
  (ret) -> error: message* is undefined
So... without even trying to, I automagically made macros sorta-hygienic with modules. When you import a module, and then call its macros using the foo!bar table convention, it works just fine. The macro will use its defining environment, rather than the caller's environment. But... if you pull the macro into the current namespace, it dies, because it's now being eval'd in the caller's namespace.

I'm pretty okay with this. It's kind of a wart that macros don't work quite right in that situation, but the fact that they do work when using the table is pretty darn nice. This also theoretically gives the caller the choice of whether to eval the macro in the defining environment or the caller's environment. The only issue, I think, is that this behavior may be confusing, at least until you've been bitten by it once or twice; by then you should have it memorized. :P

Obviously this is just a simple test... the real test will be when it's released and we'll have more complicated macros. Then we'll see if there's still any major hygiene problems or not.

-----

1 point by shader 5367 days ago | link

Sounds like you're looking for some kind of hygienic macros, and may have rediscovered the logic behind Scheme adopting them in the first place.

It is possible that properly handling modules at the core language level requires that either macros are only ever expanded and evaluated in their original context, or hygiene must be used to encapsulate that context in the macro-expansion. Or leave it without restrictions, and hope people follow the guidelines.

Not all hygienic macro systems are cumbersome or complicated, and it's possible that we could create one that permits selective breaking of hygiene as desired. One candidate would be an implementation of SRFI 72: http://srfi.schemers.org/srfi-72/srfi-72.html

Hygiene has been discussed on this forum, and I think a few systems may exist already.

-----

1 point by Pauan 5367 days ago | link

Wouldn't name-munging solve it also? Suppose, for instance, that you have two modules:

  ; foo.arc
  
  (= message* "hello")
  (mac ret () 'message*)
  
  
  ; bar.arc
  
  (import (ret) "foo.arc")
  (ret) -> error: message* is undefined
  
Okay, but what if every symbol was transparently prefixed with the module's name, or a gensym or something? Something like this:

  ; foo.arc
  
  (= foo@message* "hello")
  (main@mac foo@ret () foo@message*)
  
  
  ; bar.arc
  
  (main@import (bar@ret) "foo.arc")
  (bar@ret) -> "hello"
  
This is all handled by the interpreter, so normal users don't see it. Anyways, now `ret` would expand to this:

  foo@message*
  
Which would refer to the correct binding in the correct namespace. Hm... but then what if the macro actually does want to grab a value defined in bar.arc? Tricky.

-----

1 point by rocketnia 5367 days ago | link

I think your question is really "Wouldn't name-munging accomplish hygiene?" :)

What you're talking about is pretty similar to how Racket accomplishes hygiene by way of 'read-syntax. Everything's wrapped up so you know what its source file is, and that's all I know for now. :-p Seems Racket's system is pretty cumbersome, but then that's probably because it's very static, with modules revealing what they export before any of the exported values are actually calculated.

What you're talking about also sounds extremely similar to Common Lisp's approach to namespaces. I don't know that approach very well, but it could act as some sort of example. ^^

-----

1 point by Pauan 5367 days ago | link

Hm... that's a good point. I guess that's a good reason for implementing name-munging schemes. I'll have to consider that.

-----

1 point by aw 5367 days ago | link | parent | on: Programmable ssyntax in Arc

"but then if you overwrite the global variable, ssyntax will no longer work"

This is true of most things in Arc. (I.e., if you overwrite car, ssyntax -- along with everything else -- will also no longer work).

-----

1 point by Pauan 5367 days ago | link

Yes, but I consider it different because ssyntax-rules* is just an alist, so sometimes it might make sense (or be more convenient) to overwrite it.

In other words, if you overwrite a function, that function obviously won't work any more. But alists are just data, so it makes sense to be able to overwrite them with equivalent data and have it still work.

Yes, technically speaking alists can be evaled, and function code doesn't have to be seen as code (macros, for instance), but at the same time I don't want to enforce a "you can't overwrite this" principle unless I have to.

Consider this. In an Arc program, if you have an alist stored in the variable foo* and somebody else overwrites foo* with a different alist, your code will still work (albeit using the new alist, rather than the old one).

But in this case, the Arc interpreter has magical powers that cause it to break if you overwrite the alist. It wouldn't even break in the sense of "your program crashes". Instead, it would continue to use the old alist, like as if you had never done the assignment at all. But now you can't add any new ssyntax. This is certainly different than what the user (the one doing the assignment) would expect!

Is it possible to simulate that in Arc? If not, then I'd rather follow the principle of least surprise. Either way, it's confusing to have the assignment silently fail in that particular way.

In any case, it's not a huge deal, and it's mostly an implementation detail, but if somebody has an idea for a better system that doesn't have that problem, I'd like to hear it.

-----

1 point by aw 5367 days ago | link

I guess I don't understand the problem then. I thought by your example you were concerned that someone might overwrite ssyntax-rules* with invalid data.

So why wouldn't it work if I overwrote ssyntax-rules* with valid data?

(I.e., by analogy, it's no problem to redefine a function in Arc as long as I give it a working function).

-----

1 point by Pauan 5367 days ago | link

Because my idea was that ssyntax-rules* would be a special data type (rather than an ordinary alist). This would let me give it a hook, so I can know whenever somebody gets or sets it. With those hooks, I could easily implement optimizations that could potentially speed things up a lot. Without those hooks, I might not be able to apply those optimizations.

But if you overwrite the global variable, the new value (an ordinary alist) would not have those special hooks, which means the Arc interpreter is no longer notified about changes, and thus, the internal table is never updated.

Like I said, it's an implementation detail. The problem is that if I decide to do it like that, this particular implementation detail is visible to the user in a nasty way. Thus, I'm wary about doing it like that. I guess I should wait until it's actually implemented, then profile to see if it's a big performance hit or not.

-----

1 point by aw 5367 days ago | link

Oh, I see. Here's a thought: another place to check whether ssyntax-rules* has changed is at the start of reading or evaling an expression.

The tradeoff there is that we'd no longer be able to change ssyntax and use the new ssyntax in the same expression. That is, this would work:

  (make-changes-to-ssyntax)
  use->new%@ssyntax
but this wouldn't:

  (do (make-changes-to-ssyntax)
      use->new%@ssyntax)
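The check itself could be as simple as this sketch, with recompile-ssyntax standing in for whatever rebuilds your internal table:

  (= last-ssyntax-rules* nil)

  (def maybe-update-ssyntax ()
    (unless (iso ssyntax-rules* last-ssyntax-rules*)
      (= last-ssyntax-rules* (copy ssyntax-rules*))
      (recompile-ssyntax ssyntax-rules*)))  ; hypothetical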

-----

1 point by Pauan 5367 days ago | link

Yeah, I had the same idea; only checking ssyntax-rules* at the start of each top-level form. That's a reasonable tradeoff too, but since I don't expect the rules to change frequently, it's still far less efficient.

As hacky and clunky as it may be, it looks like hacking eval would be my best bet, as far as efficiency goes (while still behaving as the user expects). I might be able to generalize it somewhat, in case I want to add hooks to other stuff later.

The only major disadvantage is that it works based on the name, not the value. Still better than the alternative, though. Actually, come to think of it, working based on the name is what the user expects, so that should be just fine.

-----

1 point by rocketnia 5367 days ago | link

If at some point you say this in Arc...

  (def foo ()
    (look-up-something-in ssyntax-rules*))
...then (foo) will always use the most recent version of 'ssyntax-rules* , regardless of how many times it's been reassigned since the definition of 'foo.

This is one reason it's challenging to do modules in Arc. You have to construct 'foo so that its references to 'look-up-something-in and 'ssyntax-rules* refer to the meanings that exist at the time 'foo is defined (already done in Arc, but something to consider as you do things from scratch), and then you have to somehow make the same name have different meanings in different contexts (hard to do on top of official Arc, thanks to Racket's opaque namespaces, but very possible to do from scratch).

I think the least surprising thing to do would be to have your core functionality work the same way as 'foo does in Arc: It would look up the value of 'ssyntax-rules* , and that particular binding (meaning) would be a fundamental constant of the language. Ultimately, that's the same thing as hardcoding the name, but with an extra layer of indirection to account for modules.

-----

1 point by Pauan 5367 days ago | link

I'm not sure what you mean. Do you mean that it would always refer to the most recent version of ssyntax-rules* in the current module? Isn't that expected behavior? If you mean it would refer to the most recent version defined in any module, then that shouldn't be an issue, since they are/should be isolated.

My eval hack is only about updating the internal table of rules when somebody changes ssyntax-rules*. This avoids checking it too frequently, potentially making it run faster. So, the following should work fine:

  (= foo ssyntax-rules*)
  (= ssyntax-rules* nil)
  
  foo            -> previous ssyntax rules
  ssyntax-rules* -> nil

  (= ssyntax-rules* foo) -> works fine again
And yes, I plan to have lexical scope in PyArc. Each function has its own environment, and a reference to the outer environment, so it can walk up the chain until it either finds the value or realizes it's undefined.

There's a bug with it right now, but I should be able to fix it. The environments get created properly, but when doing the actual lookup, it thinks the value is undefined when it isn't. That's pretty normal, since the interpreter is still missing lots of functionality.

-----

1 point by rocketnia 5367 days ago | link

"I'm not sure what you mean. Do you mean that it would always refer to the most recent version of syntax-rules * in the current module? Isn't that expected behavior? If you mean it would refer to the most recent version defined in any module, then that shouldn't be an issue, since they are/should be isolated."

The core is a module, right? At least in concept? And shouldn't you be able to modify something in another module when you need to, to fix bugs etc.? (Not that you'd be able to fix bugs in the core, necessarily. :-p )

---

"There's a bug with it right now, but I should be able to fix it. The environments get created properly, but when doing the actual lookup, it thinks the value is undefined when it isn't. That's pretty normal, since the interpreter is still missing lots of functionality."

It sounds almost like you shallowly copy the outer scope when you create a local scope, which means you miss out on later definitions made in the outer scope. Is this right? It's a pretty wild guess, and even if it is right, I'm interested to know how you're shallowly copying arbitrary environment types. :-p

(EDIT: Oh, you've fixed this already. I'm just late to the game.)

-----

1 point by Pauan 5367 days ago | link

Yes and yes. But there's supposed to be a clean separation between modules. If you assign to a variable, it assigns it in your module. If you want to assign to a different module, you need to be explicit, like so:

  (import foo "foo.arc")
  (= foo!bar 'qux)
So in order to assign to the core module, you would need something like built-ins* to access it:

  built-ins*!ssyntax-rules*  ; modify ssyntax globally
Assuming I choose the name built-ins* of course.

"then I'm interested to know how you're shallowly copying arbitrary environment types. :-p"

Environments are just wrappers around Python's dict type, so I can use the copy() method to make a shallow copy. I might need to make my own custom copy function later, though. Arc tables are just annotated dicts, by the way. But Python lets me modify how classes behave, so for instance I should be able to get cons cells to work in a dict, even though they're mutable (and thus not normally allowed).

-----

1 point by rocketnia 5367 days ago | link

"If you want to assign to a different module, you need to be explicit, like so:"

Ah, I don't know why I didn't see that. ^_^

---

"Environments are just wrappers around Python's dict type, so I can use the copy() method to make a shallow copy. I might need to make my own custom copy function later, though."

I was harking back to "What is an environment? Anything that supports get/set on key/value pairs." Would it also need to support a copy function?

-----

1 point by Pauan 5367 days ago | link

Hm... it shouldn't, no. Only global_env is copied, so Arc types wouldn't need to worry about copying. In the hypothetical case that I would need to copy them, I could coerce them into a table and then do a copy on that. This could all be done transparently, so Arc code wouldn't need to do anything.

global_env is treated specially because it's the base for all other modules: it contains the built-in functions that modules cannot create (at least not in Arc), and also the core functions in arc.arc for convenience.

There are other strategies I could use as well, like special-casing (assign) or similar, but I figured a shallow copy would be easiest.

-----

1 point by Pauan 5367 days ago | link

"...and then you have to somehow make the same name have different meanings in different contexts (hard to do on top of official Arc, thanks to Racket's opaque namespaces, but very possible to do from scratch)."

I'm curious, how are Racket's namespaces opaque?

-----

1 point by rocketnia 5366 days ago | link

That's a good question. It's probably unfair to Racket to just call the namespaces opaque without an explanation. In fact, I'm not sure there isn't a way to do what I want to with Racket namespaces, and even if there weren't, there might be a way to configure Racket code to use a completely different kind of name resolution.

With a Racket namespace, you can get and set the values of variables, you can reset variables to their undefined state, and you can get the list of variables, so it's very much like a hash table. But suppose I want to have two namespaces which share some variable bindings. I may have been able to get and set the values of variables, but it turns out I can't manipulate the bindings they're (in my mind) stored in. I can't create two namespaces which share a binding.

Fortunately, Racket namespaces can inherit from modules, which sounds pretty much exactly what I want, and there's also the "racket/package" module, which may or may not act as an alternative (for all I know). However, you can't just create modules or packages dynamically; you have to define them with particular names and then get them back out of those names somehow. I haven't quite been able to get anything nifty to work, maybe because of some issue with phase levels.

-----

1 point by Pauan 5366 days ago | link

So... you want two different namespaces that inherit from the same namespace? Easy in PyArc (or Python):

  (= shared (new-namespace))
  (new-namespace shared)
  (new-namespace shared)

-----

1 point by rocketnia 5366 days ago | link

I did say "very possible to do from scratch." PyArc's behavior doesn't help in pg's Arc. ^^;

-----

1 point by Pauan 5366 days ago | link

Yeah, that's true. It's one of the (few?) benefits of writing the interpreter from scratch, rather than piggybacking on an existing implementation.

-----

2 points by rocketnia 5366 days ago | link

The less an interpreter piggybacks, the less will be left over to implement when porting the interpreter to another platform. That benefit's enough by itself, IMO. ^_^

But then I tend to care more about the work I'll need to do in the future than the work I do today. When my priorities are the other way around, like for fun-size DSLs that'll hopefully never need porting, being able to piggyback is nice.

-----

1 point by aw 5367 days ago | link

"The only major disadvantage is that it works based on the name, not the value."

Is this because you're still thinking in terms of creating a new data type that would notify you of changes?

My thought was that at the beginning of read, or at the beginning of eval (whichever worked better), it would check whether ssyntax-rules* had the same contents as the last time it looked.

Now you don't have any special data types or magic going on, just ordinary variables and values.

-----

1 point by Pauan 5367 days ago | link

My idea for hacking eval was that for (assign), it would check if it was assigning to ssyntax-rules* and if so, do something. In other words, implement the hook in eval rather than as a special data type.

That avoids the issue, because it works based on the name rather than the value. If I implemented ssyntax-rules* as a special data type, it would work based on the value, rather than the name. But this way, it works no matter what is assigned to ssyntax-rules*.

P.S. Checking ssyntax-rules* at the beginning of each eval sounds dreadfully slow. Eval is called a lot, after all.

-----

1 point by aw 5367 days ago | link

"Checking ssyntax-rules* at the beginning of each eval sounds dreadfully slow."

Maybe. But then, eval invokes the Arc compiler, and you may find that comparing two lists is negligible in comparison.

-----

1 point by Pauan 5367 days ago | link

I don't have an Arc compiler. Everything is evaled in my interpreter, for better or worse. It can be made fast later; I just want to get it working right now.

-----

1 point by rocketnia 5367 days ago | link

It's not just a matter of performance. That's the kind of semantic difference that'll make it really hard for people with existing Arc code to use their code on your interpreter; in a sense, it won't actually be a working Arc implementation. Are you concerned about that?

No need to be; it could very well become a language that's better than Arc (and if modules are in the core, that's definitely a possibility ^_- ).

-----

1 point by Pauan 5367 days ago | link

I'm not terribly concerned about existing code. I would like existing code to work, but it's not a huge priority as in "it has to work no matter what." I've even toyed around with the idea of changing the default ssyntax, which would definitely break code that relies on Arc's current ssyntax.

I'm curious, though, what's the semantic difference? In PyArc, when eval sees a function, it calls apply on it. Apply then calls eval on all of the function's arguments, and then calls the function. How does MzArc differ significantly in that regard?

-----

1 point by rocketnia 5367 days ago | link

In MzArc (RacketArc?), 'eval is only called once per top-level command (unless it's called explicitly). If you define a function that uses a macro and then redefine the macro, it's too late; the function's code has already been expanded.

I rely on this behavior in Lathe, where I reduce the surface area of my libraries by defining only a few global variables to be macros that act as namespaces, and then replacing those global variables with their old values when I'm done. In an Arc that expands macros at run time, like old versions of Jarc, these namespace macros just don't work at all, since the variables' meanings change before they're used.

I also rely on it in my Penknife reimplementation. I mentioned (http://arclanguage.org/item?id=14004) I was using a DSL to help me in my pursuit to define a function to crank out new Penknife core environments from scratch. The DSL is based on a macro 'pkcb, short for "Penknife core binding," such that pkcb.binding-get expands to something like gs1234-core-binding-get and also pushes the name "binding-get" onto a global list of dependencies.

There's also a global list of Arc expressions to execute as part of the core constructor function. Adding to the list of expressions invalidates the core constructor function.

Whenever you call the core constructor, if it's invalid, it redefines itself (actually, it redefines a helper function) by evaluating code that has a big (with ...) inside to bind all those 'pkcb variables. But at that point, I actually don't have a clue what 'pkcb variables to put in the (with ...). Fortunately, thanks to Arc's macro semantics, the act of defining triggers the 'pkcb macro forms, and it builds a list telling me exactly what I need to know. Then I define the core constructor again using that list in the (with ...).

I do all this just to save a little hash-table lookup, which I could have done with (or= pkcb!binding-get (pk-make-binding)). :-p If that's not important to you, I'm not surprised. It's hardly important to me, especially since it's a hash table of constant size; I mainly just like the thought of constructing the bindings eagerly and ditching the hash table altogether when I'm done.

So that example falls flat, and since you're taking care of namespaces, my other example falls flat too. I've at least established the semantic difference, I hope.

-----

1 point by Pauan 5367 days ago | link

Actually, PyArc expands macros at read-time and run-time, as needed. You can even use (apply) on macros.

If that's what aw meant by "Arc compiler", then I misunderstood. I took it to mean "compiling Arc code into a different language, such as assembly/C/whatever"

Given what aw said, though, eval doesn't invoke the Arc compiler... once read-time is over, the data structure is in memory so eval can just use that; it doesn't need to do any more parsing.

So, I'm not sure what aw meant by "eval invoking the Arc compiler."

-----

1 point by rocketnia 5367 days ago | link

"Actually, PyArc expands macros at read-time and run-time, as needed."

Ah, okay. The semantic differences may not exist then. ^_^

In ac.scm, the eponymous function 'ac takes care of turning an Arc expression into a Racket expression. Arc's 'eval invokes 'ac and then calls Racket's 'eval on the result, so it is actually a matter of compiling Arc into another language. As for PyArc, in some sense you are compiling the code into your own internal representation format, so it may still make sense to call it a compile phase. The word compile is just too general. XD

-----

1 point by Pauan 5367 days ago | link

Then in that case, I think my previous answer is correct: PyArc does not have an Arc compiler, only an interpreter.

What you referred to as compile phase I refer to as read-time, since I think that's more specific. In any case, everything passes through read/eval, which is actually exposed to Arc, unlike MzArc where eval is just a wrapper.

Thus, implementing an additional check in eval could add up, if I'm not careful. So I think I'll add a hook to (assign) instead; that way it can be really fast.

-----

1 point by rocketnia 5367 days ago | link

"What you referred to as compile phase I refer to as read-time, since I think that's more specific."

I don't know about calling it read-time. Arc being homoiconic and all, there's a phase where a value is read into memory from a character stream, and there's a phase where a value is converted into an executable format, expanding all its macro forms (and in traditional Arcs, expanding ssyntax).

Read time is kinda taken, unless (read) really does expand macros. If compile time isn't specific enough for the other phase--and I agree it isn't--how about calling it expansion time?

---

"In any case, everything passes through read/eval, which is actually exposed to Arc, unlike MzArc where eval is just a wrapper."

I don't know what consequences to expect from that, but I like it. It seems very appropriate for an interpreter. ^^

---

"Thus, implementing an additional check in eval could add up, if I'm not careful."

I see what you mean.

-----

1 point by Pauan 5367 days ago | link

Read actually does expand macros. At least for now. In other words, it does the "read characters into memory" and "convert into executable format" phases at the same time.

Calling it expansion time isn't really correct because it doesn't always expand; sometimes it just takes data like 10 or (foo bar) and converts it into the equivalent data structure.

---

"I don't know what consequences to expect from that, but I like it. It seems very appropriate for an interpreter. ^^"

Well, it means you can overwrite them in Arc, possibly to implement defcall or similar, as opposed to in MzArc where they're hidden.

-----

1 point by shader 5367 days ago | link

Maybe we need an official jargon file for Arc, given that we're having a lot of confusion over things like "compile time", what to call pg's original Arc, etc.

I realize that there was an attempt at starting a wiki, but it requires permission to edit and is somewhat different from most wikis I'm familiar with. Maybe a more open platform would be more accessible, such as Wikia? I would quickly set one up there if anyone else is interested (and aw isn't too offended ;)

-----

2 points by aw 5367 days ago | link

Well, http://sites.google.com/site/arclanguagewiki/ is up, running, works, is available for editing by anyone who wants to edit it, and I've been planning to continue to contribute to it myself. So... certainly anyone can post material anywhere they want, including on a different wiki platform if they like it better, and if it's something useful to Arc we'll include links to it just as we include links to other useful material, whether published in a wiki or a blog post or whatever... but I personally would be looking for a more definite reason to post somewhere else myself.

Is there something specific you don't like about Google Sites?

Some of the things I was looking for:

- no markdown. I despise markdown syntax (when writing about code), with its myriad special syntaxes that I can never figure out how to escape.

- a way to plug in code examples from my upcoming "pastebin for examples" site, which I hope to do by creating a Google gadget

- a clean layout without a lot of junk gunking up the page

- simple and easy to use

-----

1 point by Pauan 5367 days ago | link

I propose pgArc, personally. Short, simple, and unambiguous.

-----

2 points by shader 5366 days ago | link

Traditionally we've used "vanilla" to refer to the main branch, but then as a community we don't really care much for tradition anyway :P

-----

1 point by aw 5367 days ago | link

What's the semantic difference?

-----


"If the variable's name is nil, all kinds of things stop working"

Can you give an example? Are you trying to use nil as a variable name in Arc, or simply to have a data structure representing variables where some of those variables are named nil?

-----

1 point by shader 5366 days ago | link

I'm creating a compiler for a class, and I've been representing variable names in the AST with symbols. I could use strings instead, and may end up doing so, but symbols seemed a more elegant solution.

-----

1 point by aw 5368 days ago | link | parent | on: www to non-www redirect in svr.arc

With the svr-misc1 patch the "Host" header will be available in the request. For example, if I'm browsing to www.example.com:8080/foo/bar, then

  (alref req!headers "Host")
will be "www.example.com:8080".

I can then extend respond to take action for a particular host

  (extend respond (req) (begins (alref req!headers "Host") "www.")
    ...)
such as by issuing a redirect, something like:

  (extend respond (req) (begins (alref req!headers "Host") "www.")
    (prn "HTTP/1.0 302 Moved")
    (prn "Location: example.com/foo/bar")
    (prn))
Sorry, I haven't tested this; you'll want to look at what you actually get with a tool like curl to make sure that it's right.

-----

1 point by aw 5366 days ago | link

Correction: the "Location" header should include the full URL, including the http: or https:. Also, you may want a 301 "Moved Permanently"...:

  HTTP/1.0 301 Moved Permanently
  Location: http://example.com/foo/bar
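Putting both corrections into the earlier example (still untested):

  (extend respond (req) (begins (alref req!headers "Host") "www.")
    (prn "HTTP/1.0 301 Moved Permanently")
    (prn "Location: http://example.com/foo/bar")
    (prn))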

-----

1 point by markkat 5368 days ago | link

Thanks aw. I'll do my best to take it from there. It's appreciated. I've so little experience with this.

-----

2 points by aw 5368 days ago | link

Do post again as soon as you run into any problems or get stuck anywhere. (It will not only help you, but also the people who come after you who need the same thing you do).

-----

1 point by markkat 5367 days ago | link

Will do. It might take a few days to get to this, but I'll post my problems, and hopefully, progress. :)

-----
