All of the solutions so far have been interesting, but I was thinking more of being able to undo any change to any variable, instead of just the few variables declared as parameters.
The idea would be something like this:
(with (x 5 y 4)
  (w/undo
    (++ x)         -> x = 6
    (undo x)       -> x returns to 5
    (= y (++ x))   -> x = y = 6
    (side-efn x y) -> x and y are modified
    (undo)         -> x = y = 6 again
    (reset)))      -> x = 5, y = 4
It seems like undoing the side effects that function calls have on arbitrary variables would require overloading = (or, more likely, sref).
Maybe if you did that, you could replace each modified variable with an object that, when evaluated, returns its current value, but that can undo (and maybe even redo) by moving to an earlier or later value on its stack. Each such object would also be added to a list of modified variables owned by the w/undo macro, so that it can reset all of their values, and commit their final values when the block completes.
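In isolation, the value-stack half of that idea is easy to sketch as a closure. This is purely illustrative - undoable is a made-up name, and nothing here hooks into = or sref yet:

  ; A "variable" that remembers its past: a closure over a stack
  ; of values, answering get/set/undo messages.
  (def undoable (init)
    (let hist (list init)
      (fn ((o msg 'get) (o val))
        (case msg
          get  (car hist)
          set  (do (push val hist) val)
          undo (do (when (cdr hist) (pop hist))
                   (car hist))))))

  ; With x bound to (undoable 5):
  ;   (x)        => 5
  ;   (x 'set 6) => 6
  ;   (x 'undo)  => 5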
Would hooking that into assignment even be possible? I'm not sure that it would be, since Arc uses lexical scoping, which means an arbitrary function call would use its own context's definition of sref and wouldn't notice if we redefined it. Then again, since sref is a global variable, maybe it could be replaced and later put back by w/undo itself? Meta w/undo!
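Since sref is just a global function value, a w/undo-style macro could indeed swap it out for the extent of its body. Here's a minimal sketch of that "meta" move; assignments* and w/logged-sref are made-up names, and note it only sees compound-place assignments like (= (tbl k) v), which compile to sref calls - a bare (= x v) on a symbol doesn't go through sref at all:

  (= assignments* nil)

  ; Wrap the global sref so every sref-mediated assignment in body
  ; is logged in assignments*; restore the original afterwards,
  ; even if body throws an error.
  (mac w/logged-sref body
    (w/uniq old
      `(let ,old sref
         (= sref (fn args
                   (push args assignments*)
                   (apply ,old args)))
         (after (do ,@body)
                (= sref ,old)))))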
Anyway, this concept of making variables remember their past and return previous values reminds me of lazy evaluation and memoization. Maybe there's some sort of connection that could be used to unify them simply and add them to Arc as a unit?
(mac side-efn (x y)
  ; NOTE: This form is only intended to be used in the case where x and y
  ; are raw symbols, as they are in forms like (side-efn foo bar).
  `(= ,x (- ,y ,x)    ; temp  = old y - old x
     ,y (- ,y ,x)     ; new y = old x (calculated as old y - temp)
     ,x (+ ,y ,x)))   ; new x = old y (calculated as old x + temp)
Should 'undo reverse just one of these assignments or all three? Should it make any difference if these assignments are three separate = forms within a (do ...) block?
A top-level undo could be nifty, but on the Arc top level even the macro environment gets to change from expression to expression, so it can be a bit different. Allowing undo on a finer scale raises the question of just how to delimit the previous transaction. How many assignments should be undone, and what if they're hidden away in a call to a function nobody likes to read? What about assignments in scopes whose dynamic extent has ended?
I'm not entirely sure. Now that I think about it, fine-grained undo is really a different concept from global state restoration, and is usually only done for variables that the programmer (a) knows about and (b) has a particular restore point in mind for.
This means that finer-grained undo would be more accurately implemented with save and restore functions, as opposed to just an undo function.
The global undo should work the same way as previously stated: more like a reset, going all the way back to the start point, which may or may not be the beginning of the w/undo block.
Maybe a better system would be two pairs of save and restore functions: one pair that works on individual symbols, and another that redefines '= to store the old value in a table if it isn't already there, so that a reset could restore it.
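For the per-symbol pair, something along these lines might do. It's a rough sketch: saved*, save, and restore are made-up names, and since it goes through eval it only works on global variables:

  (= saved* (table))

  ; Remember the current value of the global variable v (a symbol).
  (def save (v)
    (= (saved* v) (eval v)))

  ; Put v back to whatever value was last saved for it.
  (def restore (v)
    (eval `(assign ,v ',(saved* v))))

  ; (= x 5) (save 'x) (= x 99) (restore 'x)  ; x is 5 again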
If you're interested, I wrote a new version of ppr, currently available on Anarki, that properly displays most functions, macros, if statements, and quasiquotes. The goal was to make the output look as much like the original source code as possible, so it works with my src function. It also makes it easy to define an indentation function for a particular form if you don't like the default indentation.
I'll still be combing over pg's pprint.arc, though. By keeping the changes small, modular, and as close to the original as possible, I figure that (a) there's a smaller chance of bugs, as each change is more understandable; (b) the code will be compatible with vanilla Arc; and thus (c) even if just a few fixes are cherry-picked, we might actually get a decent ppr in the next release. If nothing else, it's kind of fun. :)
Indeed, that's the main reason I wrote my version of ppr ;) I originally started it because I was annoyed that pg's wouldn't properly print the quasiquotes in my source code (and it's hard to debug macros that way), and then I just couldn't stop because it was so much fun.
And to be honest, I don't really know what the outstanding bugs are, or whether or not I ended up fixing them. It's been a while since I last looked at ppr.
Mind if I ask where the other 13/16 went? Since rand() % 10 can only output 0-9, there are only 10 possibilities, all of which are covered by your two ranges.
Either people frequently have problems with statistics as well, or random numbers are really complicated ;)
Well, I'm totally not an expert in computer-assisted randomness; I just know it's one of those areas, like cryptography, where you should be careful before implementing your own solution.
And I'm bad at maths, and statistics/probability are so tricky (at least for me); you're right.
However, here's a code demonstration (in Perl, sorry, for performance/convenience):
# We fake a bad rand() that uniformly returns the 16 values
# 0..15, then fold them onto 0..9 with the biased "% 10" trick.
my %cnt;
$cnt{int(rand(16)) % 10}++ for 1 .. 1_000_000;
print "$_\t$cnt{$_}\n" for sort keys %cnt;
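With a fair 16-value source, each of 0-5 should land near 2/16 of the million trials (roughly 125,000) and each of 6-9 near 1/16 (roughly 62,500), which is exactly the skew the counts show.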
What you said is correct; the only problem is that with the numbers you quoted, the total probability of rolling a number in the range [0,10) is 2/16 + 1/16 = 3/16 < 1 :) Since 0..15 mod 10 yields 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5, the values 0-5 each occur twice and 6-9 only once. So the numbers you meant to quote, as borne out by your data above, are a 10/16 = 5/8 chance to get a number in the range [0,5), and a 6/16 = 3/8 chance to get a number in the range [5,10). (Equivalently, a 12/16 = 3/4 chance to get a number in the range [0,5] and a 4/16 = 1/4 chance to get a number in the range [6,9], but since that doesn't partition the [0,10) range evenly, it's less instructive.)
There's an interesting bug with my len redefinition, and I was wondering if you guys might have some ideas.
Sometimes when I call len after the new ppr has been automatically loaded by libs.arc, it produces an error. I've determined that this is caused by len being only partially redefined: apparently, the one recursive call to len inside the redefinition can sometimes refer to the old len. It doesn't seem to happen when I run the code from the repl.
Any ideas why that happens? Any suggestions for fixing it?
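One way to make a redefinition independent of load order is to close over the old definition explicitly instead of calling len by name. A sketch - the symbol-handling branch here is just a made-up stand-in for whatever the real redefinition does:

  ; Capture the old len lexically before installing the new one;
  ; the inner call goes through orig, so it can never resolve to
  ; the wrong global binding, whenever the file happens to load.
  (let orig len
    (= len (fn (x)
             (if (isa x 'sym)
                 (orig (string x))  ; e.g. extend len to symbol names
                 (orig x)))))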
As long as they use the same symbol table, putting them into the same sig, src, and fns tables makes logical sense.
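For reference, vanilla def already populates sig with each function's parameter list, so the pattern is established (exact transcript output may differ between versions):

  arc> (def average (x y) (/ (+ x y) 2))
  #<procedure: average>
  arc> sig!average
  (x y)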
Realistically, src should be defined for '= (presuming you could distinguish global symbols from local ones), and all of these tables should be updated by every assignment macro that uses the global symbol table.
Unfortunately, it can't work for all def* forms, because some of them save their functions to different tables. For example, defop saves its functions to the srvops* table, so adding those functions to the 'sig table could result in name collisions.
Is it possible to make temporary changes to variables/hash tables? Suppose I wanted '= to save the assignment code "(= place value)" to a table using place as the key, but I only want that change to be valid for the current scope. For example:
arc> (= a 42)
42
arc> src.a
(= a 42)
arc> (let b 16
       (= a b)
       src.a)
(= a b)
arc> src.a
(= a 42)
Is that possible? The idea would be to do the same thing for def and mac so that you can know the current definition of any symbol.
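For the table half, you can at least fake scoping by saving the old entry and putting it back on the way out. A rough sketch - w/entry is a made-up name, and for simplicity it evaluates tbl and key more than once:

  ; Bind (tbl key) to val for the dynamic extent of body, then
  ; restore the old entry even if body errors out. Since setting
  ; a table entry to nil removes it in Arc, keys that didn't
  ; exist beforehand are cleanly deleted again.
  (mac w/entry (tbl key val . body)
    (w/uniq old
      `(let ,old (,tbl ,key)
         (= (,tbl ,key) ,val)
         (after (do ,@body)
                (= (,tbl ,key) ,old)))))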
Whoops, good point. I was intending to make 'a' a new local variable and then redefine it using '= to a new value. In that case, the value of a would obviously be local, so redefining its source in the global table would obviously not make sense.
The problem with save/restore is that it isn't thread safe. Hmmm.
The only problem with using standard array notation in Arc is that [] is already tied up for anonymous functions.
However, I think that too much syntactic sugar like this will end up damaging Arc: partly for aesthetic reasons, but mostly because each user has a different view of aesthetics, and adding too much will just exacerbate the "everyone has their own version" problem.
Hmm, come to think of it, I've never used my table literal syntax {...} in a program myself. I wrote it so that I could write out and read back tables, which I use all the time, but that would work just as well with some longer syntax like (literal-table k1 v1 k2 v2 ...) that doesn't use up precious syntax characters like { or #{.
I understand that it should be slower with the double layer of conversion, but how much slower have you found it to be in practice?
Could you give us some comparisons? It would be great if we could use the latest version of Scheme, even if it comes at a modest performance penalty. Arc isn't exactly designed with performance in mind as it is, so as long as it doesn't become unusable, this could be quite helpful.
Right now getting to the "arc>" prompt is 6.2 times slower, but that could be from not loading the Arc compiler as a module yet or from using the r5rs library.
You could push it onto Anarki. Other than that, there isn't much you can do to get it into a public arc.arc. It seems that if a function isn't used in the forum code, pg doesn't include it in the official release.
I think the reason that case expects just cons instead of 'cons is that it automatically quotes its keys. It's been somewhat annoying for me too, because you can't use expressions as keys - unless you're matching against a list that contains code. ^^
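For example, with type, whose results are symbols like cons and sym:

  ; case compares against its keys unevaluated, so the bare
  ; symbol cons matches the symbol that type returns:
  (def kind (x)
    (case (type x)
      cons "a pair"
      sym  "a symbol"
           "something else"))

  ; (kind '(1 2)) => "a pair"
  ; (kind 'foo)   => "a symbol"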
Are there any plans to add the +=, -=, etc. operators? Or maybe to make symbols ending in = syntactic sugar for [= _ (f _)], where f is the symbol minus the trailing =? That would be a simple and powerful way to implement the equivalent functionality.
Which naturally leads to questions about symbol macros, reader macros and then readers ;) But I won't ask those just yet.
Maybe we could just have separate ssyntax rules for when the character appears at the beginning, middle, or end of the symbol, or stands by itself? That, plus multi-character ssyntaxes, would probably go a long way towards symbol macros.
I guess I never noticed that the existing ++ operator could take a second argument, sorry.
However, the *= and /= operators would still be very useful, and it would be very cool if a simple ssyntax could generalize this to all single-argument functions, or at least to functions whose first argument is the one being modified or extended. Good examples would be += as applied to alists and tables, and things like cut or sort.
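For what it's worth, arc.arc's zap already generalizes ++ along exactly these lines without any new ssyntax; assuming x, s, and xs are already bound:

  ; (zap op place args...) expands to (= place (op place args...)):
  (zap + x 2)          ; x becomes (+ x 2), i.e. x += 2
  (zap cut s 1)        ; s becomes (cut s 1)
  (zap [sort < _] xs)  ; xs becomes (sort < xs)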
Is there a reason that you can't append an atom to a list with '+ or '++?