The Racket reader reads a seven-element list there, not a mutable hash.
Looks like you're proposing to use a two-stage serialization format. One stage is `read` and `write`, and the other is `serialize` and `deserialize`. At the level of language design, what's the point of designing it this way? (Why does Racket design it this way, anyway?)
I can see not wanting to have cyclic syntax or syntax-with-sharing in the core language just because it's a pain in the neck to parse and another pain in the neck to interpret. Maybe that's reason enough to have a separate `racket/serialize` library.
But isn't the main issue here that Arc's `read` creates immutable tables when the rest of the language only deals with mutable ones? The mutable tables go through `write` just fine, but `read` doesn't read the same kind of value that was written out. If and when this situation is improved, I don't see where `racket/serialize` would come into play.
"It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?"
I'm pretty sure the Racket reader never reads a mutable hash, but that it's possible for a custom reader extension to do it.
Some of Racket's approach to module encapsulation depends on syntax objects being deeply immutable. In particular, a module can export a macro that expands to (set! my-private-module-binding 20) but which uses `syntax-protect` so that the client of that macro can't use the `my-private-module-binding` identifier for any other purpose. If the lists constituting a program's syntax were usually mutable, then it would be hard to stop the client from just mutating that expansion to make something like (list my-private-module-binding 20), giving it access to bindings that were meant to be private.
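A rough sketch of that shape (module and binding names made up; `syntax-protect` is the real Racket primitive):

(module counter racket
  (provide bump!)
  (define my-private-module-binding 0)
  (define-syntax (bump! stx)
    ; Protect the expansion so a client can't take it apart and
    ; reuse `my-private-module-binding` for anything else.
    (syntax-protect
     #'(set! my-private-module-binding 20))))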
I think this is why Racket's `read-syntax` creates immutable data. As for why `read` does it too, I think it's just a case of `read` being a relatively unused feature in Racket. They don't have many use cases for `read` that they wouldn't rather use `read-syntax` for, so they don't usually have reasons for the behavior of `read` to diverge from the behavior of `read-syntax`.
All this being said, they could pretty easily add built-in syntaxes for mutable hashes, but I think it just hasn't come up. Who ever really wants to read a mutable value? In Racket, where immutable values are well-supported by the core libraries, you aren't gonna need it. In the rare case you want it, it's easy enough to do a deep traversal to build the mutable structure you're looking for (like my example `correcting-arc-read` does).
It only comes up as a particular problem in Arc. Arc's language design doesn't account for the existence of immutable values at all, so working around them when they appear can be a bit quirky.
I have an old fork (https://github.com/akkartik/arc) that has an extensible, generic pair of functions called `serialize` and `unserialize`, which emit not just the value but the value tagged with its type. `read` and `write` are built atop them.
> I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes.
Racket also has mutable hashes created using `make-hash` rather than `hash`. It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?
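That's easy to check at a Racket REPL:

> (immutable? (hash 'a 1))
#t
> (immutable? (make-hash (list (cons 'a 1))))
#f
> (immutable? (read (open-input-string "#hash((a . 1))")))
#t

So the reader hands back the immutable kind, like `hash` does, not `make-hash`.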
Even if you merge `writefile` and `save-table` like this but not `readfile1` and `read-table`, people still need to know, at development time, what type of data is in the file in order to read it, so they might as well use a type-specific way to write it as well. Unfortunately, merging `readfile1` and `read-table` isn't really possible, since their serialized representations overlap; a merged reader can't reconstitute type information that was never written to the file to begin with.
From a bigger-picture point of view, this seems like it would become a non-issue once Arc had its own reader. I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes. An Arc-specific reader would naturally construct Arc's mutable tables instead.
Doesn't Racket's reader give us similar problems in that it reads immutable strings and cons cells, too? So these problems could all be approached as a single project.
In the short term, it's not a project that would need a whole new reader. It could just be an adaptation of Racket's existing reader... something like this:
(define (correcting-arc-read in)
  ; Assumes `match` from `racket/match` (included in `#lang racket`).
  (let loop ([result (read in)])
    (match result

      ; TODO: See if this should construct a Racket mutable cons cell
      ; instead (`mcons`). Right now this just creates an immutable
      ; one, which should be fine since Arc uses an unsafe technique
      ; to mutate those.
      [(cons a b) (cons (loop a) (loop b))]

      ; We convert the reader's immutable hash into a mutable one,
      ; recurring over its keys and values.
      [(? hash?)
       (make-hash
         (map (match-lambda [(cons k v) (cons (loop k) (loop v))])
           (hash->list result)))]

      ; We construct a new mutable string with the same content as
      ; the immutable one the reader gives us.
      [(? string?) (substring result 0)]

      ; We handle tagged values, which are represented as mutable
      ; Racket vectors.
      [(? vector?) (list->vector (map loop (vector->list result)))]

      ; We handle various atomic values. (TODO: Add more of these
      ; cases until we've accounted for every writable type Arc
      ; supports. Alternatively, just make this a catch-all
      ; `[_ result]`.)
      [(? number?) result]
      [(? symbol?) result])))
Writing Arc's `queue` type might be tricky, since that representation relies on sharing. It's possible queues (and other tagged values in general) should have a customized read and write behavior.
As i4cu pointed out, it's because write can't handle tables. There _is_ a save-list, but it's called writefile, and it works not just on lists but on most primitive datatypes. However, it doesn't work on tables.
The naming isn't very consistent here. Arguably writefile should be called save-file. We should preserve the write prefix for operations that write to opened Racket file ports (https://docs.racket-lang.org/reference/file-ports.html) and save for operations that write to provided filenames. That convention is followed by save-table and write-table.
And of course we could simplify names still further if we took i4cu's suggestion and had write work with tables.
The article uses patterns as an example of a non-expression syntax. Another would be what might be called "access expressions", the kind that setf can use. And CL does have defsetf (as well as define-modify-macro and such) to extend it (as does Arc with = and defset).
Note that I say "access expression", though—it's designed such that (setf (car x) ...) will modify what (car x) subsequently would return (assuming that "..." doesn't rebind x). But there's also a symmetry in the pattern-matching stuff: if (cons x y) created an object, then the pattern (cons x y) will destructure the object and bind x and y to what they originally were. Perhaps that's simply good practice in designing new syntaxes.
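Racket's `match` shows the same symmetry directly:

> (match (cons 1 2)
    [(cons x y) (list x y)])
'(1 2)

The `cons` pattern undoes exactly what the `cons` constructor did, binding x and y back to the original components.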
> If that is something you have published, it'd be fun to see, whether it's finished or not.
I actually tried to dig it out the other day during this conversation, but it's buried somewhere unavailable right now. If I find it or get to it, I'll post.
> That multitude of libraries with similar purpose may be useful in some cases, sure, but also potentially a bit overwhelming for beginners, so I guess that's why I found it easier to get started with Racket.
Agreed. Navigating the volume of libraries and the options available is a real pain in the beginning, but once you get past that, it's not bad at all. At the same time, take a look at the quality of Clojure's Redis Carmine library vs. Racket's Redis libraries. Miles apart.
> Can these servlets live on a separate server so that the data can be shared between web servers?
Probably. I assume that serializable continuations from stateless servlets can just be stored wherever, like in Redis or something, instead of in the memory of one server.
> I ported HN to Clojure
If that is something you have published, it'd be fun to see, whether it's finished or not.
> Uh oh, you're getting me interested in Racket now.
My impression is that Clojure is faster, less verbose (partly thanks to clever syntax), and better supplied with immutable data structures than Racket. But when it comes to documentation and error messages, I find Racket more coherent and comprehensible.
Say, if I wanted to connect to a SQL database, with Racket I'd use the DB module, end of discussion. But with Clojure there's Korma, ClojureQL, Persist, HoneySQL, Yesql, a JDBC wrapper from Clojure contrib, SQLingvo, oj, Suricatta, aggregate, Hyperion, HugSQL, and probably a few more. That multitude of libraries with similar purpose may be useful in some cases, sure, but also potentially a bit overwhelming for beginners, so I guess that's why I found it easier to get started with Racket.
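And the `db` route really is short; a sketch (the database file and table are made up):

(require db)

; #:mode 'create makes the file if it doesn't exist yet.
(define conn (sqlite3-connect #:database "news.db" #:mode 'create))

; sqlite3 uses ? placeholders for query parameters.
(query-rows conn "SELECT id, title FROM stories WHERE score > ?" 10)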
When I was referring to load balancing and centralizing the data, I meant many web servers sharing a centralized/external source for auth/session data.
I'm unfamiliar with Racket's web server 'servlets'. The docs are a little unclear (at least to me). Can these servlets live on a separate server so that the data can be shared between web servers? I'm guessing that was/is not a requirement for you, but I'm just interested in knowing if that's how it can work.
Uh oh, you're getting me interested in Racket now. I can't have that... I have too many projects :)
edit: I guess at the end of the day these servlets are web servers, right? So you can, even if you have to do it over http and build an api.
> I did the same thing, about 6 or 7 years ago, that you're doing now. I ported HN to Clojure (which is actually how I learned Clojure).
> I needed to centralize the data for the authentication and fnid session info. I think Arc calls them fnids... You probably know better than I do now, but Arc has all this code to expire these session fnids and so, for me, Redis was just a good fit for that task.
The Racket web server is quite "batteries included" and comes with these different managers for dealing with expiration of sessions/continuations, such as the LRU manager:
> The memory limit is set to `memory-threshold` bytes. Continuations start with 24 life points. Life points are deducted at the rate of one every 10 minutes, or one every 5 seconds when the memory limit is exceeded. Hence the maximum life time for a continuation is 4 hours, and the minimum is 2 minutes.
> If the load on the server spikes—as indicated by memory usage—the server will quickly expire continuations, until the memory is back under control. If the load stays low, it will still efficiently expire old continuations.
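Wiring that manager in is short; a rough sketch with a made-up 128 MB threshold and a placeholder servlet:

(require web-server/servlet
         web-server/servlet-env
         web-server/managers/lru)

(serve/servlet
 (lambda (req) (response/xexpr '(html (body "hello"))))
 ; #f means use the default expiration response; the second
 ; argument is the memory threshold in bytes.
 #:manager (make-threshold-LRU-manager #f (* 128 1024 1024)))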
> PLOP (PLace Oriented Programming), which Datalog does address.
Yeah, I was thinking more along the lines that Datomic has built-in functionality to address the caching, cache eviction, and indexing that goes along with all that data accumulation. But you're correct, DataLog does accumulate facts.
> Interesting. Just checked news.arc, and yes `initload*` is set to 15000.
I did the same thing, about 6 or 7 years ago, that you're doing now. I ported HN to Clojure (which is actually how I learned Clojure). If memory serves me correctly, when I was doing the work I realized I needed a real DB if I wanted to support load balancing, i.e. I needed to centralize the data for the authentication and fnid session info. I think Arc calls them fnids... You probably know better than I do now, but Arc has all this code to expire these session fnids and so, for me, Redis was just a good fit for that task.
Anyways, I'll be sure to take a look at the final result of your work.
> Datomic uses DataLog as part of its query language, but that's pretty much where the comparison should end. Things like "treating the database as a value", and features such as data accretion that Rich talks about have nothing to do with DataLog.
I'm not sure I totally agree with this. I think that apart from talking about the design of Datomic, he also has a more general point against what he calls PLOP (PLace Oriented Programming), which Datalog does address.
For example in plain Racket a value is lost if something else is put in its place:
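> (define scores (make-hash))   ; a small sketch with a mutable hash
> (hash-set! scores 'alice 1)
> (hash-set! scores 'alice 2)   ; the 1 is simply gone now
> (hash-ref scores 'alice)
2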
Hickey also mentions how git doesn't do PLOP, in that it doesn't throw out your commit history (without you asking it to do so).
> The reason I mention Redis is that the HN app is very well suited to it. HN only keeps 'x' amount of data in memory, and in Redis the data lives in memory. Also, Redis allows you to set expiry times on data for auto eviction.
Interesting. Just checked news.arc, and yes `initload*` is set to 15000. Interesting idea from Redis with expiry times. I'll check it out. I hadn't considered the scenario of storing enough text to max out on memory, because it would probably be premature optimisation, but good to keep in mind. I'd like to give Redis/Rackdis a try; thanks for the suggestion. I've been hosting an Etherpad Lite instance, and Redis was painless to set up.
> I'm pointing this out because it seems to me that you're doing (or are going to be doing) a lot of work that may not be worth it for what you're trying to accomplish.
Yes, my priorities here are definitely to make the code as brief and simple as possible, and to not have to do too much work. With plain Datalog it's very little work to timestamp a fact, and it's also kind of necessary, e.g. to figure out which fact is most recent when previous facts are not removed. I'm just trying to get the gist of Hickey's ideas here.
Datomic uses DataLog as part of its query language, but that's pretty much where the comparison should end. Things like "treating the database as a value", and features such as data accretion that Rich talks about have nothing to do with DataLog. They're features of Datomic. So for example when you mention never retracting data, well, your data size is going to grow continuously unless you write your own data management layer on top. Datomic, on the other hand, does this for you. When you want to query the database over time, you're going to need to store time intervals for all of your data and incorporate them into each query. Whereas in Datomic (which has a time log) you can pass in the DB itself as a value (with an associated time interval) and Datomic will make sure your queries are working against the dataset that accounts for that time interval.
I'm pointing this out because it seems to me that you're doing (or are going to be doing) a lot of work that may not be worth it for what you're trying to accomplish.
> I'm not too familiar with NoSQL, but my impression is that they are all about speed and scalability of data storage.
Yes and no. Often speed can be a feature NoSQL dbs advertise, but really, for me anyway, it's about flexibility and ease of use. Traditional RDBMSs, for example, require creating schemas. Many NoSQL databases don't require a schema at all, which makes them easier to use and more flexible to change. NoSQL stores are often key-value stores, so it can be really easy to take a hash-map or table of data from your code, just dump it into a NoSQL datastore, and be able to query it.
My personal favourite is Redis, and it might be worth considering for your app. With Redis you can (there's a quick command sketch after this list):
- store a value under a key
- store table data
- store values in a set (which allows intersection/difference queries)
- store values in a sorted-set (which allows you to query by some numerical value like timestamp)
- use it to manage relationships
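Roughly one Redis command per item above (keys and values made up):

SET user:42:name "alice"
HSET story:134 title "How onion routing works" score 10
SADD story:134:voters i4cu
ZADD stories:by-time 1542151773 story:134
SADD user:42:upvoted story:134

And EXPIRE session:abc123 3600 covers the auto-eviction mentioned below.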
The reason I mention Redis is that the HN app is very well suited to it. HN only keeps 'x' amount of data in memory, and in Redis the data lives in memory. Also, Redis allows you to set expiry times on data for auto eviction. And Redis supports ordered lists, which can make it useful for lisp-based languages.
However it's not embedded. And if that's a requirement I'd almost suggest you move away from Racket and adopt a language that has more options for embeddable databases. I guess if you're willing to roll your own (and it looks like you may be) then that's awesome too.
But in case you decide otherwise... The library I use is Redis Carmine, but there are Racket clients.
> So where will your data be? In a local data structure?
Yes, I think. The database is just stored in memory, but it can be serialized and saved to disk using `write-theory` and loaded back with `read-theory`. That is what I'm doing for now, and it's very naive and inefficient to do a full database dump rather than just appending new data; I presume it's particularly in this area where Datomic is way more optimised and well thought out.
> Also, I'm curious what made you choose a graph db. It seems like you're inheriting a lot of complexity and I'm wondering what the benefit is over a more traditional sql or nosql db.
Well, I did the initial work on the web app: creating user accounts, adding posts and replies, and then I got to data storage. Initially I did a News-style flat-file database, just saving data as lists in files, that are then loaded into memory when the program starts. It mostly worked but also felt a bit complicated, and I thought that perhaps I should just use a proper database?
What I like about news.arc is that you can just launch it without any configuration, so MySQL and PostgreSQL were out of the question, and I started reading a bit about SQLite. But I've also had this fascination with logic programming, from what people are posting here and from reading a bit of The Reasoned Schemer, and I watched some of those Rich Hickey talks again, where he talks about Datalog, which happens to be available for Racket.
There are just some things that are incredibly simple in declarative/logic programming. For example, if you have facts about stories being `parent` of their replies, then it's simple to just define the `ancestor` relation, and when you have the `ancestor` relation, you automagically get `descendants` without having to write any code, because it's just the inverse of `ancestor`:
(! (:- (ancestor A B)
       (parent A B)))
(! (:- (ancestor A B)
       (parent A C)
       (ancestor C B)))
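For instance, with Racket's `datalog` package (the story/reply names are made up), all you need beyond those two rules is facts and queries:

(require datalog)

(define news (make-theory))
(datalog news
  (! (parent story1 reply1))
  (! (parent reply1 reply2))
  ; The two `ancestor` rules above go here as well.
  (? (ancestor story1 X))   ; descendants of story1: reply1, reply2
  (? (ancestor X reply2)))  ; ancestors of reply2: reply1, story1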
But, I've also bumped into some questions - more practical than theoretical - and that is why it's interesting to hear about your experience with Datomic, and why I'm asking here.
So, SQLite is still on the table. I'm not too familiar with NoSQL, but my impression is that they are all about speed and scalability of data storage. I haven't used MongoDB, but isn't it essentially just like storing JSON in a file, except faster? It would be interesting if any of those could be used in conjunction with Datalog though, if they don't add too much complexity for the sake of increased speed.
Yeah, Datomic doesn't expect the relationships to be stored in the DB. It stores a bunch of indexes for you and it has a great query language, but that's it.
So where will your data be? In a local data structure? I read your Racket Datalog link, but it doesn't show any details for the database side (i.e. durability etc.) even though it's labelled a database.
Also, I'm curious what made you choose a graph db. It seems like you're inheriting a lot of complexity and I'm wondering what the benefit is over a more traditional sql or nosql db.
> Is this irrelevant-word example a real feature you're building into the app or a contrived example to understand Racket DataLog DB use?
It's a simplified and slightly contrived illustration of an issue in this pre-alpha code I haven't published yet, perhaps just because I haven't thought of a name for the project yet. But yes, let's assume it's a real feature for now.
> 3. How are you going to handle title changes?
I like Hickey's idea of accretion of data - with a timestamp! - and not forgetting previous facts. I think he's talking about it in The Database as a Value. So, a story could have a few different titles in the database, and the newest one is the one you get to see.
The thing is, it's easy to add a timestamp to facts as a way of not considering old facts without forgetting them, but not to relations. For example in Racket Datalog:
(! (voted "i4cu" 'up 134))
can easily get a timestamp:
(! (voted "i4cu" 'up 134 1542151773))
but that would not really make sense in a relation like
(! (:- (root A B)
       (ancestor A B)
       (parent null A)))
So, I don't feel there's a need for ever retracting facts, because timestamps solve that. (Even when deleting something, you could just add the fact that it has been deleted.) But with relations (a.k.a. business logic) I think I will need to retract things, which is tricky because this logic is not only present in the code but also in the database. This was the problem I was asking about.
> 2. How are you going to make sure past stories gain the relationship to newly added irrelevant words?
Datalog queries reflect the current set of facts and relations, so the possible irrelevance of a story is never stored anywhere and therefore never needs to be updated.
> 1. How are you going to remove the past relationships between stories and an irrelevant word that's getting removed?
Same as above. For example, in a place-oriented program a story object could have a boolean attribute `irrelevant?`, whereas I'm sending a query every time this value is needed, so no stale `irrelevance` attributes are stored anywhere.
> After you get a handle on these, what you really need to do is provide an interface, from within the app's admin tools, to trigger the noted functionality. This way the business logic is in the app and it's modifying the data.
Yes, that is kind of there already, as in having functionality for changing titles. The `irrelevant` functionality is not there now.
Ok, I think I just need to work on getting this code publishable, because it might be easier to discuss tangible examples.
Is this irrelevant-word example a real feature you're building into the app or a contrived example to understand Racket DataLog DB use?
If it's a real feature, and I'm assuming it is, then I'm also assuming that when a story is submitted you're parsing the title and adding the relationship to the current set of irrelevant-words that are stored.
So the questions are:
1. How are you going to remove the past relationships between stories and an irrelevant word that's getting removed? (looks like we've answered this).
2. How are you going to make sure past stories gain the relationship to newly added irrelevant words?
3. How are you going to handle title changes?
After you get a handle on these, what you really need to do is provide an interface, from within the app's admin tools, to trigger the noted functionality. This way the business logic is in the app and it's modifying the data.
> So if you decide that onion is no longer irrelevant then delete the entity 'irrelevant-word-001'. Which seems, at least to me, better than making code pushes.
I can make sense of that when there's just one instance of this app running. Yet imagine the scenario where the web app has been published, and suddenly other people are running this web app. If the business logic is then changed, somehow I'd have to tell those people: "Oh btw, when you're running `git pull` next time, then you just also gotta run this query to retract some of the old relations from the database."
Definitely not a problem for me yet, but I can just smell it coming. I could add those retractions to the code, but they would have to stay there indefinitely, because it's not possible to tell if those retractions have taken place on everyone's databases yet.
Maybe I'm over-thinking this.
> What's interesting (at least to me) is that Datomic has a function called 'retractEntity'
Looks like the one called `~` in Racket's Datalog.
So you would also need to remove the value 'irrelevant-word-001' from the above.
At least this is how I would do it in Datomic anyway.
What's interesting (at least to me) is that Datomic has a function called 'retractEntity', which auto-magically removes all references to an entity in any value slot when you retract the entity. Man I love Datomic :)
> I'd never heard of Fulcro before. Given what is often emphasised about Clojure, at first glance at the Fulcro docs I'm a bit surprised how often they mention state and mutations:
Well, things on the client side can sometimes be mutable. No one gets around the fact that the DOM is a mutable-only object. But besides that, the Fulcro library has labelled one of its features a 'Mutation', which was probably a bad choice of name, but it has nothing to do with the mutability of the underlying cljs object it uses for that "Mutation". You'll notice the example is using 'swap!'. That means it's modifying an atom, where an atom is an interface for making changes to the immutable object it holds. So really 'swap!' takes the change request, constructs a new version of the original thing held in the atom, with the changes applied, then 'swap's it with the original item inside the atom. The original thing was never changed (no changes to existing slots in memory). Hence Clojure's things are immutable, and they are in Fulcro too, except when changing the DOM tree.
As for state that's mentioned all the time in Clojure :)
That Compojure example does look quite familiar and Arc-like, and looks like it can handle multipart post requests too.
I'd never heard of Fulcro before. Given what is often emphasised about Clojure, at first glance at the Fulcro docs I'm a bit surprised how often they mention state and mutations:
> The other very common case is this: You’ve loaded something from the server, and you’d like to use it as the basis for form fields. In this case the data is already normalized in your state database, and you’ll need to work on it via a mutation.
Also, I'm too much into graceful degradation to ever go all-out Cljs, unless it was for a phone app. But I find that it's often interesting to see how people do things in Clojure, even when not using that language, so I'll be giving those Fulcro videos a look.
> I can't speak for Datalog, but I've used Datomic.
> The relationships are made by storing an entity id into the value slot of another fact. Thus the model is both flat (being a list of facts) and hierarchical (they can point to each other). It pretty much becomes a graph database. Is that what you mean?
Yes, exactly! What I worry about (because I'm fairly new to logic programming) is whether it could potentially be difficult to update a program where storage of business logic and storage of data aren't separated?
If we assume we have a Hacker News web app where we have a fact: One day Alice submits a story with the title "How to peel onions"; and we are thinking "Why on earth did she post that here?!?" So we add this relation to our code: A story is `irrelevant` if it has the word "onion" in the title. Then, the next day we get another fact: Now Bob has submitted a story called "How onion routing works". This new story by Bob then makes us reconsider our definition of `irrelevant`.
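In Racket's Datalog, that definition might look something like this (predicate names made up; since Datalog has no string operations, the title words would themselves have to be asserted as facts at submission time):

(! (irrelevant-word onion))
(! (:- (irrelevant Story)
       (title-word Story Word)
       (irrelevant-word Word)))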
In a typical imperative program we'd just edit the code and redefine the `irrelevant` predicate, and it would take effect next time we run the program (or instantly if we enter it at a repl). But here in our logic program we store this `irrelevant` relation in our graph database, so even though we have removed it from our code, it is still sitting there in the database along with all the facts, outside the reach and responsibility of git, or whichever VCS we're using.
Yes, so my question is: How do you practically deal with changes to business logic in logic programming, where data storage and relation storage are one and the same? Perhaps Datomic just avoids this issue somehow? I may also be missing or misunderstanding something.
> What I find a bit tricky about Datalog is that relations are stored together with other facts, which in my mind feels a bit like storing code in a database, but maybe I just haven't wrapped my head around it yet.
I can't speak for Datalog, but I've used Datomic.
If it's the same, then a 'fact' is comprised of an entity (the id) + an attribute + a value.
The relationships are made by storing an entity id into the value slot of another fact. Thus the model is both flat (being a list of facts) and hierarchical (they can point to each other). It pretty much becomes a graph database. Is that what you mean?
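For example, in rough Datomic-style [entity attribute value] notation (ids and attributes made up):

[1001 :story/title  "How onion routing works"]
[1001 :story/author 2002]
[2002 :user/name    "bob"]

Here the :story/author value slot holds entity 2002's id, which is where the graph comes from.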