ADTs in Clojure
Today on the Clojure mailing list, a user asked:
"Is the concept of Abstract Data Types useful in Clojure?
If yes, how would you implement one?"
This is an important question, because coding to interfaces and abstractions, rather than to concrete data, is a key aspect of writing scalable programs.
My favorite treatment of ADTs is in Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi. This book shows all the variations of ADTs: stateful and declarative (aka functional), bundled and unbundled, open and secure. I'm going to walk through a few of these variations in Clojure.
Let's consider a classic stack ADT. Since Clojure primarily emphasizes functional programming, I'm going to focus on functional implementations of this ADT, i.e., versions in which pushing and popping a stack are non-destructive and actually return a new stack.
Open unbundled
Now the whole point of an ADT is to separate the implementation from the interface, but for starters, let's implement a stack ADT using an underlying implementation of a list. In an "unbundled" implementation, the stack data is completely separate from the functions that operate on it. It is "open" because we make no attempt to hide the implementation details of the stack. We are simply going to trust the programmer to use our interface and ignore those details.
(defn stack-new [] nil)
(defn stack-push [s e] (cons e s))
(defn stack-top [s] (first s))
(defn stack-pop [s] (rest s))
(defn stack-empty? [s] (empty? s))
This open, unbundled implementation is probably the simplest way to code an ADT in Clojure. Because it is the simplest, this is probably the most common kind of coding you'll see in Clojure, and thus many would argue it is the most idiomatic. But just because it's the easiest to code doesn't mean it's the best.
The most obvious problem with an open implementation is that it is a leaky abstraction. In other words, client code can easily see that this stack is implemented as a list. And it would be all too easy to forget that this implementation is supposed to be hidden, and call a list function on our stack (e.g., count).
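For example, with the definitions above:

(def s (stack-push (stack-push (stack-new) 1) 2))
(stack-top s) ;=> 2
(stack-pop s) ;=> (1)
(count s)     ;=> 2 -- a plain list function works on our "stack", so the abstraction leaks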
I have seen several comments on the clojure mailing list suggesting that it is inevitable that any ADT written in Clojure will be inherently leaky. But that is not the case...
Secure unbundled
The idea is to wrap the data structure in something that effectively locks it with a unique key, so it can only be unwrapped with the correct key. The only functions that know the key are the ones that serve as the interface for the ADT, so nothing else can inspect or tamper with the contents of the data structure. For optimum security you'd need to use cryptographic techniques, but for illustration purposes, I've simply used gensym to generate a unique key.
;; Forward declaration, so the stack-pop below can compile even though
;; stack-empty? is defined later in the same top-level form.
(declare stack-empty?)

(let [security-key (gensym),
      wrap (fn [x]
             (fn [k]
               (if (= security-key k) x (throw (new Exception))))),
      unwrap (fn [x] (x security-key))]
  (defn stack-new [] (wrap nil))
  (defn stack-push [s e] (wrap (cons e (unwrap s))))
  (defn stack-top [s] (first (unwrap s)))
  (defn stack-pop [s] (if (stack-empty? s)
                        nil
                        (wrap (rest (unwrap s)))))
  (defn stack-empty? [s] (empty? (unwrap s))))
Notice how wrap and unwrap are local to these functions. If you play around with these stack functions, you'll see that the stacks you generate appear to the outside world as functions, and you need to pass them the right key to get at the innards. Admittedly, a gensym is relatively easy to forge, but you get the idea: the innards are well protected, and there is essentially no way to manipulate the stack other than through the approved interface functions.
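To make this concrete, a quick REPL check shows the difference from the open version:

(def s (stack-push (stack-new) 42))
(stack-top s)  ;=> 42
(count s)      ; throws -- the stack is now a function, not a list
(s :wrong-key) ; throws the Exception -- only the interface functions hold the key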
One problem with unbundled ADTs is that they lack polymorphism. If you have multiple stack implementations, each one will require its own set of interface functions with different names (or the same names in different namespaces). If you want to write a function that operates over any stack implementation, it will first require you to pass in some sort of map of the interface functions to use. Although this technique is relatively common in functional languages (especially in the ML family), it's a fairly clunky way to achieve polymorphism.
Secure bundled
The next logical step is to bundle the data along with the functions that know how to operate on it. The result looks something like this:
(let [stack-object (fn stack-object [s]
                     (let [push (fn [e] (stack-object (cons e s))),
                           top (fn [] (first s)),
                           pop (fn [] (if (seq s) (stack-object (rest s)) nil)),
                           empty? (fn [] (empty? s))]
                       {:push push, :top top, :pop pop, :empty? empty?}))]
  (defn stack-new [] (stack-object nil)))
In this implementation, a stack is an associative map of four interface functions. The actual data of the stack is hidden by lexical scoping so that only the interface functions can see it.
The big problem here is that the syntax for manipulating these bundled stacks is fairly unpleasant. For example,
(def stack1 (stack-new))
(def stack2 ((stack1 :push) 2))
(def stack2-top ((stack2 :top)))
You can improve the readability by providing interface functions that look like the unbundled version:
(defn stack-push [s e] ((s :push) e))
(defn stack-top [s] ((s :top)))
(defn stack-pop [s] ((s :pop)))
(defn stack-empty? [s] ((s :empty?)))
But there's a big difference between this and the unbundled version. Mainly, we get polymorphism. If you have two different concrete implementations of the ADT, your client code doesn't care. When you call stack-push, for example, the function looks up the correct push function for this given implementation in the bundle, i.e., (s :push) and calls that to do the pushing.
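To see that polymorphism in action, here is a hypothetical second, vector-backed implementation (vstack-object and vector-stack-new are my names, not from the original post); the same wrapper functions above work on it unchanged:

(let [vstack-object (fn vstack-object [v]
                      {:push (fn [e] (vstack-object (conj v e))),
                       :top (fn [] (peek v)),
                       :pop (fn [] (if (seq v) (vstack-object (pop v)) nil)),
                       :empty? (fn [] (empty? v))})]
  (defn vector-stack-new [] (vstack-object [])))

(stack-top (stack-push (vector-stack-new) 2)) ;=> 2, dispatched through the bundle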
Secure bundled, another way
You may have noticed that the secure bundled approach is essentially the way that OO languages provide ADTs. Since Clojure interoperates with Java, it stands to reason that you should be able to use Java to implement your ADTs. Yes, you can.
The first step is to define your interface. You can either do this directly in Java, or use Clojure's ability to generate Java interfaces with something like this:
(gen-interface
  :name adt.IStack
  :methods [[push [Object] adt.IStack]
            [top [] Object]
            [pop [] adt.IStack]
            [empty? [] Boolean]])
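Compiling it requires Clojure's compile function plus a bit of fiddly classpath and namespace setup. A minimal sketch, assuming (hypothetically) that the gen-interface form lives in src/adt/istack.clj under (ns adt.istack), with both src and the classes output directory on the classpath:

(binding [*compile-path* "classes"]
  (compile 'adt.istack)) ; writes adt/IStack.class (among others) under classes/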
With the interface compiled, the bundled ADT looks very similar to the map-based version above:
(let [stack-object (fn stack-object [s]
                     (proxy [adt.IStack] []
                       (push [e] (stack-object (cons e s)))
                       (top [] (first s))
                       ;; wrap the tail so pop returns an adt.IStack as declared,
                       ;; rather than leaking the raw list
                       (pop [] (if (seq s) (stack-object (rest s)) nil))
                       (empty? [] (empty? s))))]
  (defn stack-new [] (stack-object nil)))
You could also make it even more Java-like by using gen-class. The dot syntax is a bit cleaner than the map-based syntax of the previous version, for example:
(. s top)
(. s push 2)
And as before, you can clean it up even more by creating unbundled interface functions that actually call the bundled versions behind the scenes.
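For instance, mirroring the earlier wrappers with the dot syntax:

(defn stack-push [s e] (. s push e))
(defn stack-top [s] (. s top))
(defn stack-pop [s] (. s pop))
(defn stack-empty? [s] (. s empty?))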
This is how most of the Clojure core interfaces and data structures are implemented, so in some sense, one could argue that this is the most idiomatic approach of all. However, I think many Clojurians would prefer to get away from the Java approach if there is a better way.
And bundled versions are not without their problems. The CTM book gives a great example of a collection ADT and the challenge of writing a union function on the two collections. The problem with the bundled version is that bundles must dispatch on one input, so the union function, when written from the perspective of the first collection, doesn't have access to the private parts of the second collection. A version of the ADT that can see the innards of both inputs might be considerably more efficient.
Secure unbundled, revisited
If the primary limitation of bundled ADT implementations is their single-dispatch nature, perhaps there is a way to go back to the secure unbundled version, but leverage Clojure's multimethods to gain a more sophisticated kind of polymorphism.
In this final, most sophisticated variant, I'm going to go ahead and show two concrete implementations, the list-based one we've been working with as well as a vector-based implementation, so you can see how the two implementations coexist side by side.
First, we define the interface as multimethods that dispatch on the type of stack. Notice how we don't include the constructor stack-new as part of the polymorphic interface; we'll need a separate constructor for each concrete implementation.
(defmulti stack-push (fn [s e] (type s)))
(defmulti stack-top type)
(defmulti stack-pop type)
(defmulti stack-empty? type)
Since we're dispatching on type, we can't quite use the same wrapping representation as before (because functions can't have metadata). This time, the wrapped representation of the stack will be a map with one field (:wrapped-stack) and metadata with the appropriate type.
(let [security-key (gensym),
      wrap (fn [x]
             (with-meta
               {:wrapped-stack
                (fn [k]
                  (if (= security-key k) x (throw (new Exception))))}
               {:type ::list-stack})),
      unwrap (fn [x] ((x :wrapped-stack) security-key))]
  (defn list-stack-new [] (wrap nil))
  (defmethod stack-push ::list-stack [s e] (wrap (cons e (unwrap s))))
  (defmethod stack-top ::list-stack [s] (first (unwrap s)))
  (defmethod stack-pop ::list-stack [s]
    (if (stack-empty? s) nil (wrap (rest (unwrap s)))))
  (defmethod stack-empty? ::list-stack [s] (empty? (unwrap s))))
The vector-based version is almost the same:
(let [security-key (gensym),
      wrap (fn [x]
             (with-meta
               {:wrapped-stack
                (fn [k]
                  (if (= security-key k) x (throw (new Exception))))}
               {:type ::vector-stack})),
      unwrap (fn [x] ((x :wrapped-stack) security-key))]
  (defn vector-stack-new [] (wrap []))
  (defmethod stack-push ::vector-stack [s e] (wrap (conj (unwrap s) e)))
  (defmethod stack-top ::vector-stack [s] (peek (unwrap s)))
  (defmethod stack-pop ::vector-stack [s]
    (if (stack-empty? s) nil (wrap (pop (unwrap s)))))
  (defmethod stack-empty? ::vector-stack [s] (empty? (unwrap s))))
In fact, they are so similar, it seems clear you could write a macro to abstract out some of the commonalities.
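As one sketch of that idea (make-wrapper is my name, not something from the original post), the shared machinery can even be factored into a plain function, leaving each implementation with little more than its constructor and defmethods:

(defn make-wrapper [type-kw]
  (let [security-key (gensym)]
    {:wrap (fn [x]
             (with-meta
               {:wrapped-stack
                (fn [k] (if (= security-key k) x (throw (new Exception))))}
               {:type type-kw}))
     :unwrap (fn [x] ((x :wrapped-stack) security-key))}))

(let [{:keys [wrap unwrap]} (make-wrapper ::vector-stack)]
  (defn vector-stack-new [] (wrap []))
  ;; the remaining defmethods follow the same pattern
  (defmethod stack-push ::vector-stack [s e] (wrap (conj (unwrap s) e))))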
In my mind, this is definitely the most interesting implementation of ADTs. By using multimethods, there is the potential to implement ADTs that would be rather difficult to implement efficiently in other languages. Unfortunately, it is also quite clear that this version is considerably more work to write than the naive, open unbundled version we started out with.
I would very much like to see additional syntactic support for making secure, unbundled ADTs easier to write. Something that can simplify or eliminate the need for explicit wrapping and unwrapping would be essential, and of course, a better system for generating security keys than gensym. It's not clear to me whether this support could be provided entirely by a library, or whether additional constructs in the core would be needed.
What about equality?
Another big open question in my mind is, "What happens when you try to add equality?" When you use Clojure's open structures to store your data (like maps), you get equality and hash codes for free. But how hard would it be to add equality and hash code functionality to these secure implementations of ADTs? Is it easier with some of these implementation styles than others? I haven't played around with this aspect of Clojure enough to give an answer yet. I'd love to hear comments from those who have.
Laziness in Clojure – Traps, workarounds, and experimental hacks
The power of laziness
Consider the following snippet of Clojure code, which is similar to code found in several Clojure blogs, and code found in Clojure's contributed code base:
(def whole-numbers (iterate inc 0))
This binds whole-numbers to a lazy sequence of all the numbers 0, 1, 2, …. Laziness is a powerful concept: it allows us to simulate an infinite sequence, generating more numbers as you need them.
So for example, let's say you want to find the first whole number that satisfies a given predicate function (let's call it pred). In an imperative language, you might do something like this:
i = 0
while not pred(i):
    i += 1
return i
On the other hand, with laziness, you can do something like this:
(first (filter pred whole-numbers))
Gotcha!
But there is a subtle problem with the definition of whole-numbers. Do you see it? I'll give you a hint: (iterate inc 0) does in fact produce the lazy sequence of whole numbers.
The problem is that we gave a name to it. Huh? How could that make a difference? Yes, as surprising as it may seem, (first (filter pred (iterate inc 0))) works just fine, whereas (first (filter pred whole-numbers)) is likely to run slowly, and may even crash your program at runtime for certain inputs.
How is this possible? How could giving something a name break your code? That's just crazy, right?
Well, this pitfall stems from a design decision in Clojure that all lazy sequences are cached. In other words, as Clojure expands the sequence, it automatically caches the values in a linked list, so that they never have to be computed again. The next time you traverse the sequence, you are just seeing the values that were previously cached.
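A tiny REPL demonstration of the caching; the println side effect fires only on the first traversal:

(def xs (map #(do (println "computing" %) %) (range 3)))
(doall xs) ; prints: computing 0, computing 1, computing 2
(doall xs) ; prints nothing -- the values now come from the cache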
Caching has a number of benefits. If your sequence is very computation-intensive, and you plan to traverse it multiple times, then caching can give a huge performance boost. If your sequence represents a traversal of some sort of non-persistent data structure, caching is essential to ensure that repeat calls to first and rest always yield the same result.
But Clojure caches all lazy sequences, which creates a number of traps for the unwary. The “unnamed” version of the code works because the garbage collector collects the cached cells as it goes. (Even though it is somewhat wasteful to cache all these cells and immediately throw them away, Java's collection of short-lived garbage is very fast, so it's not as much of a performance hit as you might expect.) However, once you give whole-numbers a name, the var holds onto the head of the sequence, and the garbage collector can't collect anything. As you traverse the sequence, you'll take a huge performance hit as massive gobs of memory are allocated. And eventually, you'll go too far, run out of memory, and your program crashes.
To see this in action, go compare the following on your Clojure setup:
(nth (iterate inc 0) 20000000)
(nth whole-numbers 20000000)
;uses the above def of whole-numbers
On my machine, the first example completes in just a few seconds. The other crashes.
Sometimes cached lazy sequences are very useful, but in this case, caching just gets in the way. No one would ever want to cache the whole-numbers sequence. It's significantly faster to generate the sequence via incrementing every time you need to traverse the sequence, than it is to cache and hold the values. Furthermore, because the sequence represents an infinite sequence, there's no upper limit to the memory consumption. If you use a named version of the whole numbers in your program, there's a good chance that eventually, your program will break with certain large enough inputs.
Workarounds
Programming is all about abstracting out commonalities in code, so it's rather frustrating that you can't give a name to the whole-numbers sequence and need to type (iterate inc 0) everywhere. This is a fairly short definition, but you can imagine how with a more complex sequence, giving a name might really be essential.
Fortunately, there is a workaround. Instead of giving the whole-numbers sequence a name, you give a name to a function that knows how to produce the sequence. In other words,
(defn whole-numbers [] (iterate inc 0))
Now, you have to change all your uses of whole-numbers to a function call as well:
(nth (whole-numbers) 20000000) ;This works!
The reason this works is that every time you call this function, it produces a brand new sequence of whole-numbers, which is unnamed, so the garbage collector can collect as it goes.
So when you're writing code in Clojure, every time you create a lazy sequence, you need to ask yourself two questions:
1. What will happen to my code if the entire sequence becomes realized in memory? Is the sequence too big to be accommodated?
2. Is it cheaper to generate the items in the sequence from scratch each time, than to allocate the memory necessary to cache the items?
If the answer to either of these questions is yes, then you should avoid naming your sequence, or wrap it in a function.
If you plan to program in Clojure, you need to be aware of this pitfall, and you must know the workaround. From what I can tell from the various blogs and posts on Clojure's google group, many new users are falling into this trap and naming potentially large sequences, creating fragile code with a very real danger of failure.
Not Satisfied
There are several reasons why I find this workaround unsatisfying.
1. It imposes a significant cognitive burden, because it not only affects the way you name a sequence (by wrapping it in a function), but also every place you use the sequence. In fact, once you have a library filled with a combination of regular lazy sequences and function-wrapped lazy sequences, then every time you use one of your sequences, you have to remember which kind it is in order to call it correctly.
2. It's not perfect. Although wrapping a sequence in a function prevents any global var from pointing at the sequence, it is still possible for some function that manipulates the sequence to accidentally hold onto a reference to some portion of it, causing a memory crash. For example, it was recently pointed out on the google group that when filtering a large sequence (even if the sequence is unnamed), the filter function holds onto a portion of the sequence as it scans ahead to find the next element for the filtered sequence. If the elements that pass the filtering test are spread out too far, the program crashes. Several people looked closely at the code and couldn't figure out why it was crashing. Eventually, Rich Hickey, the designer of the language, pointed out what was going on. He plans to write a more complicated version of filter in a future version of Clojure that will avoid this particular problem, but that's not really the point. The concern is that even knowing the function-wrapping trick, cached lazy sequences represent a kind of danger that is difficult to isolate and understand. When several Clojure programmers have trouble finding the source of a memory crash in a one-line piece of code, you can imagine how difficult it will be in a large body of code.
3. If you know for sure in advance that you'd rather not have the sequence be cached, there is currently no way to express that in Clojure. This workaround doesn't actually suppress caching, it just makes it so the garbage collector can throw away the cached values right away. You're still incurring a (small, but measurable) performance penalty from the unnecessary caching.
4. Even if the workaround worked reliably, this is a pretty big “gotcha” for newcomers to the language.
Is there another way?
A few weeks ago, I posted about this topic on the Clojure google group, questioning the wisdom of this design decision to have all lazy sequences cache their values. I argued that it would be a cleaner design if lazy sequences did not cache their values, unless the programmer explicitly asked for the sequences to be cached. Basically, my argument revolved around two main points. First, it's easier to convert an uncached sequence to a cached sequence than vice versa. Second, if you forget to cache something that should be cached, it's merely a performance optimization problem, but if you forget to uncache something that needs to be cached, your program can crash. So, it's safer if the language defaults to uncached.
Rich Hickey, the designer of Clojure, responded by saying that he had already experimented with uncached lazy sequences, and that such a choice causes its own set of problems, both performance issues and broken code, which he found to be even more common than the ones I've raised here. He encouraged me to try my own experiments and report back. The rest of this blog post goes into more detail about the experiments I've run since that thread.
Four categories of sequences
Before taking the plunge and doing some experimenting, my first step was to analyze the various use cases of lazy sequences that need to be dealt with. I found that lazy sequences fall into one of four categories:
1. A sequence where the function to generate the sequence is very fast to compute, and the function is a “pure” function in the sense that it always generates the same output. For this use case, you never want to cache, and it becomes especially important if the sequence is large.
2. A sequence that will only be traversed once. If you really know for sure in advance that you will only be traversing it once, then you're better off not caching. Again, this is especially important if the sequence is large.
3. A sequence where it is slow to compute successive elements, and you'll possibly need to do this more than once. Caching in this case is important for performance.
4. A sequence where the function to generate successive elements is not guaranteed to return the same values. This can come up with Java interop, with I/O, and other sequence views of a non-persistent data structure. Caching is essential here to impose a sane, consistent view on the data.
RH seemed especially concerned about how category 4 sequences break without caching. Now I'll be the first to admit that I haven't written a huge body of work in Clojure, but looking through my code from the past few weeks, I discovered that I didn't have any category 4 sequences in my code. First, I don't tend to deal with Java interop; I write entirely using the core Clojure data structures, which are all immutable. So any lazy sequence I write is guaranteed to be consistent, even without caching. I also don't do much with I/O. I tend to just write functions that I can use interactively from the REPL.
Now if I were going to create a lazy sequence from an ephemeral source, I would almost certainly be using one of Clojure's built-in functions that do this conversion for you, such as line-seq, resultset-seq, file-seq, iterator-seq, enumeration-seq, etc., rather than using lazy-cons directly. So as long as those functions return a cached sequence, I pretty much don't have to worry about category 4. Furthermore, RH has said that he is working on a completely new abstraction (tentatively called streams), that (as I understand it) is a better fit for I/O and other ephemeral sources than the sequence abstraction. I speculate that once he has developed this new abstraction, the concerns about category 4 sequences will largely go away. People will generally write “streams” over ephemeral sources, and then convert them to cached lazy sequences with a one-time call to stream-seq. So as long as stream-seq builds a lazy sequence, category 4 is well supported, and we can analyze the relative merits of cached vs. uncached sequences for categories 1-3 separately.
Since category 4 sequences aren't really present in my own code base, my experiments mainly revolve around trying to discover what feels natural for categories 1-3.
Experiment #1 – Totally Uncached
Since RH experimented with uncached laziness in a previous version of Clojure, the code for an uncached lazy sequence builder is right there in the existing Clojure codebase, but the corresponding macro (lazy-seq as opposed to lazy-cons), has been commented out. So for my first experiment, I wanted to look at what it would feel like to code in an environment where everything in Clojure built from lazy-cons (except the ones that represent ephemeral “streams”) is uncached.
To do this, I created a namespace called uncached, in which I copied over most of Clojure's core constructs that use lazy-cons (but not file-seq, enumeration-seq, etc.). Within this namespace, I modified lazy-cons to create an uncached LazySeq rather than a LazyCons. In other words,
(defmacro lazy-cons [first-expr rest-expr]
  (list 'new 'clojure.lang.LazySeq
        (list `fn (list [] first-expr) (list [(gensym)] rest-expr))))
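As a quick sanity check of the uncached flavor (uncached-iterate is my own hypothetical helper, and this assumes the historical clojure.lang.LazySeq class described above):

(defn uncached-iterate [f x]
  (lazy-cons x (uncached-iterate f (f x))))
(nth (uncached-iterate inc 0) 20000000) ; completes, like the unnamed cached version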
So anything I import from this namespace will build an uncached sequence. To make the experiment as extreme as possible, I then went to every bit of Clojure code I've written (which I admit isn't that much, but hey, experiments have to start somewhere), and excluded all these functions from the core, using my uncached imported functions instead.
I found that by using uncached sequences, my code felt a little zippier from a performance standpoint. I found that the vast majority of the sequences that I construct are only used for one pass. Furthermore, most of my sequences are very long (possibly infinite), and can be generated quickly, so the uncached behavior was a great default fit for me.
One interesting example from my code is a function I wrote that produces a sequence of all the permutations of a given collection. Now generating the next permutation in the sequence is not exactly a trivial operation. So caching does in fact speed things up if you're going to traverse the permutation sequence multiple times. However, what I discovered is that there's such a huge time hit from allocating the memory for caching purposes the first time through, that you'd have to traverse the permutation sequence at least 20 times to begin to make up for the time lost from caching. Even so, caching becomes completely impractical once you hit permutations of 10+ items, so I've concluded that a permutations sequence should just be uncached.
Now at one point, I applied a filter to my permutation sequence, to extract permutations that had a certain property. This filtered sequence is something that did in fact make sense to cache, provided I intended to use it more than once. Fortunately, the Clojure API already includes a function called cache-seq which does exactly that. I found it very easy to get the caching behavior I wanted for this specific case: at the point where I defined the filtered sequence, I wrapped it in a call to cache-seq. Alternatively, I could have called vec on the sequence to explicitly realize it (shown below).
(def fs (cache-seq (filter pred (permutations (range 10)))))
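The vec alternative just mentioned, which realizes the whole filtered sequence up front, would be:

(def fs (vec (filter pred (permutations (range 10)))))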
So, at least in my own code, the default of not caching sequences worked rather well. There was one instance where I needed to cache the sequence, and it was easy to accomplish that. But again, I need to admit that I've only written a small amount of Clojure code (probably no more than 2kloc). So I can't claim this proves anything. I'm providing my simple uncached library so that others can also try this very interesting experiment.
Experiment #2 – Take your pick
If we assume that defaulting to uncached isn't right for everyone, there's still the open question of what it would be like to program in a version of Clojure that offers a choice between cached and uncached versions of lazy-cons and its core sequence functions. To explore this option, I made use of the same “uncached” library, but rather than excluding the core functions and overriding them with the uncached versions, I just “require”d my uncached library so that both versions of functions were available to me. So I could call lazy-cons or uncached/lazy-cons, map or uncached/map, for or uncached/for.
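Concretely, choosing per call site looked something like this (expensive-fn stands in for whatever function was actually involved):

(require 'uncached)
;; single pass, cheap to regenerate: pick the uncached flavor
(def once-through (uncached/map expensive-fn (uncached/iterate inc 0)))
;; traversed repeatedly, costly to recompute: keep the core cached flavor
(def reused (map expensive-fn (range 1000)))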
One really nice aspect of Clojure's design is the way that anything that adheres to the sequence interface works just fine with all the sequence functions. So the amazing thing is that whether you choose to build a cached or uncached sequence, it makes not one bit of difference to the consumers of your sequence. So once you make your decision as to whether a sequence should be cached or uncached, you can basically just forget about it and everything works seamlessly as you pass that sequence around.
Despite that, at first it felt like a burden to have to constantly think about whether I needed a cached or uncached version of a sequence. But then again, I had already been doing similar analysis to avoid getting burned by a memory crash from caching, so really it wasn't much different than before. The main difference was that now I could really specify that I wanted something uncached, rather than using the function-wrapping workaround. Consuming the two types of sequences was now equally easy, and I got a slight performance boost as well.
I also noticed something rather interesting in the patterns of when I tended to call cached vs. uncached versions of the core functions.
For some of the functions, I was always calling the uncached versions, namely cycle, repeat, replicate, interleave, interpose, take, take-while, butlast, concat, and lazy-cat. And as I think about it further, I honestly can't think of any time you'd want a cached version of these functions. Remember that if your underlying sequences that you are operating on are cached, these functions will be equally persistent, so it's really a question of how time-consuming their operations are, and these have very little overhead. For this reason, I believe that, even if Clojure makes no other changes in its approach to laziness, it would be a simple, non-breaking, but significant improvement to change the above core functions to internally use lazy-seq as opposed to lazy-cons.
On the other hand, I found that distinct, filter, remove, and drop-while were the most likely to need to be cached.
If everything cleanly fell into the category of either definitely needing to be cached, or definitely needing to not be cached, things would be simple. Alas, that is not the case. For things like map, for, drop, and take-nth, it all totally depends on how complex the functions are (or how big the n is).
So for those functions, it is very useful to be able to choose cached or uncached. But this raises the question of what happens when other programmers start creating sequence-producing functions. In some cases they'll be able to make an executive decision in advance as to whether the resulting sequence is cached or uncached. But what about the cases where the consumer needs to make the choice? Do we expect the programmer to provide both a cached and an uncached version?
Contrast this with experiment #1, in which lazy-cons always produces uncached sequences. With such behavior, the programmer of a new sequence-producing function just uses (uncached) lazy-cons – the consumer knows it will be uncached, and can easily turn it into cached at point of naming, if necessary.
Summarizing Experiment #2, I'll say that I really liked having added control, and the ability to select cached or uncached sequences, but I just can't imagine how people will easily write libraries that provide both options.
Experiment #3 – Intelligent auto-selection
Since some of the sequences should clearly be cached, and some clearly not, it would be ideal if the borderline cases could be chosen intelligently by the language, in order to completely remove the cognitive burden of constantly having to choose. At first, I thought maybe a scheme would work in which the cached/uncached behavior of lazy-cons depends on the nature of the thing you're consing onto. But this isn't really useful; the desired cached/uncached behavior depends more on the complexity of the delayed function. After some experimentation, I feel that it is not possible to automate the decision, so this experiment was definitely a failure.
Experiment #4 – Uncaching a cached sequence
The major problem with Experiment #2 is that it forces library writers to supply two flavors of their sequence-generating functions, which is impractical. So I tried to get really clever. For this experiment, I went back to the standard lazy-cons behavior, i.e., caching by default. But then I wrote a macro to suppress caching for any sequence built with lazy-cons. I did this by setting up a global *cached* var with a root binding of true; lazy-cons picks its behavior from the current value of *cached*, and a special uncached macro binds the var to false. Like this:
(def *cached* true)
(defmacro lazy-cons
  [first-expr rest-expr]
  (list 'if '*cached*
        (list 'new 'clojure.lang.LazyCons
              (list `fn (list [] first-expr)
                    (list [(gensym)] rest-expr)))
        (list 'new 'clojure.lang.LazySeq
              (list `fn (list [] (list 'binding ['*cached* 'false]
                                       first-expr))
                    (list [(gensym)] (list 'binding ['*cached* 'false]
                                           rest-expr))))))
(defmacro uncached [& rst]
  `(binding [*cached* false]
     ~@rst))
This basically works, in the sense that you can say something like (uncached (map * (iterate inc 0) (iterate inc 1))) and the uncached macro affects all the calls to lazy-cons within map and iterate, so you've forced this thing to be uncached “all the way down”. But the way my macro works, uncached sequences become extremely slow. Because bindings aren't captured by the closures, the instruction to rebind *cached* has to be threaded through the delayed closures, and this noticeably hinders the performance of uncached sequences. If you flipped it around and made uncached the default, then cached sequences would suffer the performance hit. Is there a better way to write this macro? If not, I must decree this experiment a failure.
Conclusions
I find Clojure's current cache-by-default-with-no-option-for-uncached laziness to be unsatisfying. I genuinely hope there is a better solution, and I want to help find it. Clearly, sequences generated from ephemeral stream-like entities must always be cached. But for the other types of sequences, I find that my experiments with uncached-by-default-with-option-for-cached laziness turned out to be quite pleasant. This may very well be a function of my own programming niche, so I've provided a simple uncached library so others can try to replicate this experiment with their own code. If more people report success with uncached-by-default, maybe a stronger case can be made for change.
My other experiments were less successful, although I learned quite a bit from trying them, which is why I reported on those experiments as well. Most importantly, I gained a deeper understanding of what types of functions tend to produce sequences that need to be cached and which ones tend to produce sequences that should be uncached. This suggests that, at a minimum, some of the core library functions would benefit from being changed to produce uncached sequences.
Perhaps someone else will see a way to turn one of these approaches into something workable, or provide an entirely new solution.
--- Reposted from http://programming-puzzler.blogspot.com/2009/04/adts-in-clojure.html
Today on the Clojure mailing list, a user asked:
"Is the concept of Abstract Data Types useful in Clojure?
If yes, how would you implement one?"
This is an important question because coding to interfaces and abstractions, rather than concrete data, is a key aspect to writing scalable programs.
My favorite treatment of ADTs is in Concepts, Techniques, and Models of Programming by Peter Van Roy and Seif Haridi. This book shows all the variations of ADTs: stateful and declarative (aka functional), bundled and unbundled, open and secure. I'm going to walk through a few of these variations in Clojure.
Let's consider a classic stack ADT. Since Clojure primarily emphasizes functional programming, I'm going to focus on functional implementations of this ADT, i.e., versions in which pushing and popping a stack are non-destructive and actually return a new stack.
Open unbundled
Now the whole point of an ADT is to separate the implementation from the interface, but for starters, let's implement a stack ADT using an underlying implementation of a list. In an "unbundled" implementation, the stack data is completely separate from the functions that operate on it. It is "open" because we make no attempt to hide the implementation details of the stack. We are simply going to trust the programmer to use our interface and ignore those details.
(defn stack-new [] nil)
(defn stack-push [s e] (cons e s))
(defn stack-top [s] (first s))
(defn stack-pop [s] (rest s))
(defn stack-empty? [s] (empty? s))
This open, unbundled implementation is probably the simplest way to code an ADT in Clojure. Because it is the simplest, this is probably the most common kind of coding you'll see in Clojure, and thus many would argue it is the most idiomatic. But just because it's the easiest to code doesn't mean it's the best.
The most obvious problem with an open implementation is that it is a leaky abstraction. In other words, client code can easily see that this stack is implemented as a list. And it would be all too easy to forget that this implementation is supposed to be hidden, and call a list function on our stack (e.g., count).
I have seen several comments on the clojure mailing list suggesting that it is inevitable that any ADT written in Clojure will be inherently leaky. But that is not the case...
Secure unbundled
The idea is to wrap the data structure in something that effectively locks it with a unique key, so it can only be unwrapped with the correct key. The only functions that know the key are the ones that serve as the interface for the ADT, so nothing else can inspect or tamper with the contents of the data structure. For optimum security you'd need to use cryptographic techniques, but for illustration purposes, I've simply used gensym to generate a unique key.
(let [security-key (gensym),
wrap (fn [x]
(fn [k]
(if (= security-key k) x (throw (new Exception))))),
unwrap (fn [x] (x security-key))]
(defn stack-new [] (wrap nil))
(defn stack-push [s e] (wrap (cons e (unwrap s))))
(defn stack-top [s] (first (unwrap s)))
(defn stack-pop [s] (if (stack-empty? s) nil
(wrap (rest (unwrap s)))))
(defn stack-empty? [s] (empty? (unwrap s))))
Notice how wrap and unwrap are local to these functions. If you play around with these stack functions, you'll see that the stacks you generate appear to the outside world as functions, and you need to pass it the right key to get at the innards. In this case, a gensym is relatively easy to forge, but you should get the idea. The innards are very well protected and there is essentially no way to manipulate the stack other than through the approved interface functions.
One problem with unbundled ADTs is that they lack polymorphism. If you have multiple stack implementations, each one will require its own set of interface functions with different names (or the same names in different namespaces). If you want to write a function that operates over any stack implementation, it will first require you to pass in some sort of map of the interface functions to use. Although this technique is relatively common in functional languages (especially in the ML family), it's a fairly clunky way to achieve polymorphism.
Secure bundled
The next logical step is to bundle the data along with the functions that know how to operate on it. This looks something like this:
(let [stack-object (fn stack-object [s]
(let [push (fn [e] (stack-object (cons e s))),
top (fn [] (first s)),
pop (fn [] (if (seq s) (stack-object (rest s)) nil)),
empty? (fn [] (empty? s))]
{:push push, :top top, :pop pop, :empty? empty?}))]
(defn stack-new [] (stack-object nil)))
In this implementation, a stack is an associative map of four interface functions. The actual data of the stack is hidden by lexical scoping so that only the interface functions can see it.
The big problem here is that the syntax for manipulating these bundled stacks is fairly unpleasant. For example,
(def stack1 (stack-new))
(def stack2 ((stack1 :push) 2))
(def stack2-top ((stack2 :top)))
You can improve the readability by providing interface functions that look like the unbundled version:
(defn stack-push [s e] ((s :push) e))
(defn stack-top [s] ((s :top)))
(defn stack-pop [s] ((s :pop)))
(defn stack-empty? [s] ((s :empty?)))
But there's a big difference between this and the unbundled version. Mainly, we get polymorphism. If you have two different concrete implementations of the ADT, your client code doesn't care. When you call stack-push, for example, the function looks up the correct push function for this given implementation in the bundle, i.e., (s :push) and calls that to do the pushing.
Secure bundled, another way
You may have noticed that the secure bundled approach is essentially the way that OO languages provide ADTs. Since Clojure interoperates with Java, it stands to reason that you should be able to use Java to implement your ADTs. Yes, you can.
The first step is to define your interface. You can either do this directly in Java, or use Clojure's ability to generate Java interfaces with something like this:
(gen-interface
:name adt.IStack
:methods [[push [Object] adt.IStack]
[top [] Object]
[pop [] adt.IStack]
[empty? [] Boolean]])
This code would need to be compiled with Clojure's compile function, and that requires a bit of tricky setting up of classpaths and namespaces, but it's doable. Then, the bundled ADT looks very similar to above:
(let [stack-object (fn stack-object [s]
(proxy [adt.IStack] []
(push [e] (stack-object (cons e s)))
(top [] (first s))
(pop [] (if (seq s) (rest s) nil))
(empty? [] (empty? s))))]
(defn stack-new [] (stack-object nil)))
You could also make it even more Java-like by using gen-class. The dot syntax is a bit cleaner than the map-based syntax of the previous version, for example:
(. s top)
(. s push 2)
And as before, you can clean it up even more by creating unbundled interface functions that actually call the bundled versions behind the scenes.
This is how most of the Clojure core interfaces and data structures are implemented, so in some sense, one could argue that this is the most idiomatic approach of all. However, I think many Clojurians would prefer to get away from the Java approach if there is a better way.
And bundled versions are not without their problems. The CTM book gives a great example of a collection ADT and the challenge of writing a union function on the two collections. The problem with the bundled version is that bundles must dispatch on one input, so the union function, when written from the perspective of the first collection, doesn't have access to the private parts of the second collection. A version of the ADT that can see the innards of both inputs might be considerably more efficient.
Secure unbundled, revisited
If the primary limitation of bundled ADT implementations is their single-dispatch nature, perhaps there is a way to go back to the secure unbundled version, but leverage Clojure's multimethods to gain a more sophisticated kind of polymorphism.
In this final, most sophisticated variant, I'm going to go ahead and show two concrete implementations, the list-based one we've been working with as well as a vector-based implementation, so you can see how the two implementations coexist side by side.
First, we define the interface as multimethods that dispatch on the type of stack. Notice how we don't include the constructor stack-new as part of the polymorphic interface. We'll need a separate constructor for each concrete implmentation.
(defmulti stack-push (fn [s e] (type s)))
(defmulti stack-top type)
(defmulti stack-pop type)
(defmulti stack-empty? type)
Since we're dispatching on type, we can't quite use the same wrapping representation as before (because functions can't have metadata). This time, the wrapped representation of the stack will be a map with one field (:wrapped-stack) and metadata with the appropriate type.
(let [security-key (gensym),
wrap (fn [x]
(with-meta
{:wrapped-stack
(fn [k]
(if (= security-key k) x (throw (new Exception))))}
{:type ::list-stack})),
unwrap (fn [x] ((x :wrapped-stack) security-key))]
(defn list-stack-new [] (wrap nil))
(defmethod stack-push ::list-stack [s e] (wrap (cons e (unwrap s))))
(defmethod stack-top ::list-stack [s] (first (unwrap s)))
(defmethod stack-pop ::list-stack [s] (if (stack-empty? s) nil
(wrap (rest (unwrap s)))))
(defmethod stack-empty? ::list-stack [s] (empty? (unwrap s))))
The vector-based version is almost the same:
(let [security-key (gensym),
wrap (fn [x]
(with-meta
{:wrapped-stack
(fn [k]
(if (= security-key k) x (throw (new Exception))))}
{:type ::vector-stack})),
unwrap (fn [x] ((x :wrapped-stack) security-key))]
(defn vector-stack-new [] (wrap []))
(defmethod stack-push ::vector-stack [s e] (wrap (conj (unwrap s) e)))
(defmethod stack-top ::vector-stack [s] (peek (unwrap s)))
(defmethod stack-pop ::vector-stack [s] (if (stack-empty? s) nil
(wrap (pop (unwrap s)))))
(defmethod stack-empty? ::vector-stack [s] (empty? (unwrap s))))
In fact, they are so similar, it seems clear you could write a macro to abstract out some of the commonalities.
In my mind, this is definitely the most interesting implementation of ADTs. By using multimethods, there is the potential to implement ADTs that would be rather difficult to implement efficiently in other languages. Unfortunately, it is also quite clear that this version is considerably more work to write than the naive, open unbundled version we started out with.
I would very much like to see additional syntactic support for making secure, unbundled ADTs easier to write. Something that can simplify or eliminate the need for explicit wrapping and unwrapping would be essential, and of course, a better system for generating security keys than gensym. It's not clear to me whether this support could be provided entirely by a library, or whether additional constructs in the core would be needed.
What about equality?
Another big open question in my mind is, "What happens when you try to add equality?" When you use Clojure's open structures to store your data (like maps), you get equality and hash codes for free. But how hard would it be to add equality and hash code functionality to these secure implementations of ADTs? Is it easier with some of these implementation styles than others? I haven't played around with this aspect of Clojure enough to give an answer yet. I'd love to hear comments from those who have.
Laziness in Clojure – Traps, workarounds, and experimental hacks
The power of laziness
Consider the following snippet of Clojure code, which is similar to code found in several Clojure blogs, and code found in Clojure's contributed code base:
(def whole-numbers (iterate inc 0))
This defines whole-numbers to a lazy sequence of all the numbers 0, 1, 2, …. Laziness is a powerful concept, and it allows us to simulate an infinite sequence. As you need more numbers, more are generated.
So for example, let's say you want to find the first whole number that satisfies a given predicate function (let's call it pred). In an imperative language, you might do something like this:
i = 0
while not pred(i):
i += 1
return i
On the other hand, with laziness, you can do something like this:
(first (filter pred whole-numbers))
Gotcha!
But there is a subtle problem with the definition of whole-numbers. Do you see it? I'll give you a hint: (iterate inc 0) does in fact produce the lazy sequence of whole numbers.
The problem is that we gave a name to it. Huh? How could that make a difference? Yes, as surprising as it may seem, (first (filter pred (iterate inc 0))) works just fine, whereas (first (filter pred whole-numbers)) is likely to run slowly, and may even crash your program at runtime for certain inputs.
How is this possible? How could giving something a name break your code? That's just crazy, right?
Well, this pitfall stems from a design decision in Clojure that all lazy sequences are cached. In other words, as Clojure expands the sequence, it automatically caches the values in a linked list, so that they never have to be computed again. The next time you traverse the sequence, you are just seeing the values that were previously cached.
Caching has a number of benefits. If your sequence is very computation-intensive, and you plan to traverse it multiple times, then caching can give a huge performance boost. If your sequence represents a traversal of some sort of non-persistent data structure, caching is essential to ensure that repeat calls to first and rest always yield the same result.
But Clojure caches all lazy sequences, which creates a number of traps for the unwary. The “unnamed” version of the code works because the garbage collector collects the cached cells as it goes. (Even though it is somewhat wasteful to cache all these cells and immediately throw them away, Java's collection of short-lived garbage is very fast, and it's not as much of a performance hit as you might expect). However, when you give a name to the whole-numbers, the garbage collector can't do any collection. As you traverse the whole-numbers sequence, you'll have a huge performance hit as massive gobs of memory are allocated. And eventually, you'll go too far, run out of memory, and your program crashes.
To see this in action, go compare the following on your Clojure setup:
(nth (iterate inc 0) 20000000)
(nth whole-numbers 20000000)
;uses the above def of whole-numbers
On my machine, the first example completes in just a few seconds. The other crashes.
Sometimes cached lazy sequences are very useful, but in this case, caching just gets in the way. No one would ever want to cache the whole-numbers sequence. It's significantly faster to generate the sequence via incrementing every time you need to traverse the sequence, than it is to cache and hold the values. Furthermore, because the sequence represents an infinite sequence, there's no upper limit to the memory consumption. If you use a named version of the whole numbers in your program, there's a good chance that eventually, your program will break with certain large enough inputs.
Workarounds
Programming is all about abstracting out commonalities in code, so it's rather frustrating that you can't give a name to the whole-numbers sequence and need to type (iterate inc 0) everywhere. This is a fairly short definition, but you can imagine how with a more complex sequence, giving a name might really be essential.
Fortunately, there is a workaround. Instead of giving the whole-numbers sequence a name, you give a name to a function that knows how to produce the sequence. In other words,
(defn whole-numbers [] (iterate inc 0))
Now, you have to change all your uses of whole-numbers to a function call as well:
(nth (whole-numbers) 20000000) ;This works!
The reason this works is that every time you call this function, it produces a brand new sequence of whole-numbers, which is unnamed, so the garbage collector can collect as it goes.
So when you're writing code in Clojure, every time you create a lazy sequence, you need to ask yourself two questions:
1. What will happen to my code if the entire sequence becomes realized in memory? Is the sequence too big to be accommodated?
2. Is it cheaper to generate the items in the sequence from scratch each time, than to allocate the memory necessary to cache the items?
If the answer to either of these questions is yes, then you should avoid naming your sequence, or wrap it in a function.
If you plan to program in Clojure, you need to be aware of this pitfall, and you must know the workaround. From what I can tell from the various blogs and posts on Clojure's google group, many new users are falling into this trap and naming potentially large sequences, creating fragile code with a very real danger of failure.
Not Satisfied
There are several reasons why I find this workaround unsatisfying.
1. It imposes a signifcant cognitive burden because it not only affects the way you name a sequence (by wrapping it in a function), but it also affects every place you use the sequence. In fact, once you have a library filled with a combination of regular lazy sequences, and some of these function-wrapped lazy sequences, then every time you use one of your sequences, you have to remember which kind it is in order to call it correctly.
2. It's not perfect. Although wrapping a sequence in a function prevents any global var from pointing at the sequence, it is still possible for some function that manipulates the sequence to accidentally hold onto a reference to some portion of the sequence, causing a memory crash. For example, it was recently pointed out on the google group that when filtering a large sequence (even if the sequence is unnamed), the filter function, as it is filtering, holds onto a portion of the sequence as it scans ahead to find the next element for the filtered sequence. If the elements that pass the filtering test are spread out too far, the program crashes. Several people looked closely at the code, and couldn't figure out why it was crashing. Eventually, Rich Hickey, the designer of the language pointed out what was going on. He plans to write a more complicated version of filter in a future version of Clojure that will avoid this particular problem, but that's not really the point. The concern here is that even knowing the function-wrapping trick, cached lazy sequences represent a certain kind of danger that is difficult to isolate and understand. When several Clojure programmers have trouble finding the source of a memory crash in a one-line piece of code, you can imagine how difficult it will be on a large body of code.
3. If you know for sure in advance that you'd rather not have the sequence be cached, there is currently no way to express that in Clojure. This workaround doesn't actually suppress caching, it just makes it so the garbage collector can throw away the cached values right away. You're still incurring a (small, but measurable) performance penalty from the unnecessary caching.
4. Even if the workaround worked reliably, this is a pretty big “gotcha” for newcomers to the language.
Is there another way?
A few weeks ago, I posted about this topic on the Clojure google group, questioning the wisdom of this design decision to have all lazy sequences cache their values. I argued that it would be a cleaner design if lazy sequences did not cache their values, unless the programmer explicitly asked for the sequences to be cached. Basically, my argument revolved around two main points. First, it's easier to convert an uncached sequence to a cached sequence than vice versa. Second, if you forget to cache something that should be cached, it's merely a performance optimization problem, but if you forget to uncache something that needs to be cached, your program can crash. So, it's safer if the language defaults to uncached.
Rich Hickey, the designer of Clojure, responded by saying that he had already experimented with uncached lazy sequences, and that such a choice causes a different set of problems with performance problems and code breaking – problems which he found to be even more common than the ones I've raised here. He encouraged me to try my own experiments, and report back. The rest of this blog post goes into more details about the nature of my experiments since that thread.
Four categories of sequences
Before taking the plunge and doing some experimenting, my first step was to analyze the various use cases of lazy sequences that need to be dealt with. I found that lazy sequences fall into one of four categories:
1. A sequence where the function to generate the sequence is very fast to compute, and the function is a “pure” function in the sense that it always generates the same output. For this use case, you never want to cache, and it becomes especially important if the sequence is large.
2. A sequence that will only be traversed once. If you really know for sure in advance that you will only be traversing it once, then you're better off not caching. Again, this is especially important if the sequence is large.
3. A sequence where it is slow to compute successive elements, and you'll possibly need to do this more than once. Caching in this case is important for performance.
4. A sequence where the function to generate successive elements is not guaranteed to return the same values. This can come up with Java interop, with I/O, and other sequence views of a non-persistent data structure. Caching is essential here to impose a sane, consistent view on the data.
RH seemed especially concerned about how category 4 sequences break without caching. Now I'll be the first to admit that I haven't written a huge body of work in Clojure, but looking through my code from the past few weeks, I discovered that I didn't have any category 4 sequences in my code. First, I don't tend to deal with Java interop; I write entirely using the core Clojure data structures, which are all immutable. So any lazy sequence I write is guaranteed to be consistent, even without caching. I also don't do much with I/O. I tend to just write functions that I can use interactively from the REPL.
Now if I were going to create a lazy sequence from an ephemeral source, I would almost certainly be using one of Clojure's built-in functions that do this conversion for you, such as line-seq, resultset-seq, file-seq, iterator-seq, enumeration-seq, etc., rather than using lazy-cons directly. So as long as those functions return a cached sequence, I pretty much don't have to worry about category 4. Furthermore, RH has said that he is working on a completely new abstraction (tentatively called streams) that, as I understand it, is a better fit for I/O and other ephemeral sources than the sequence abstraction. I speculate that once he has developed this new abstraction, the concerns about category 4 sequences will largely go away. People will generally write “streams” over ephemeral sources, and then convert them to cached lazy sequences with a one-time call to stream-seq. So as long as stream-seq builds a cached lazy sequence, category 4 is well supported, and we can analyze the relative merits of cached vs. uncached sequences for categories 1-3 separately.
Since category 4 sequences aren't really present in my own code base, my experiments mainly revolve around trying to discover what feels natural for categories 1-3.
Experiment #1 – Totally Uncached
Since RH experimented with uncached laziness in a previous version of Clojure, the code for an uncached lazy sequence builder is right there in the existing Clojure codebase, but the corresponding macro (lazy-seq, as opposed to lazy-cons) has been commented out. So for my first experiment, I wanted to see what it would feel like to code in an environment where everything in Clojure built from lazy-cons (except the constructs that represent ephemeral “streams”) is uncached.
To do this, I created a namespace called uncached, into which I copied most of Clojure's core constructs that use lazy-cons (but not file-seq, enumeration-seq, etc.). Within this namespace, I modified lazy-cons to create an uncached LazySeq rather than a LazyCons. In other words,
(defmacro lazy-cons [first-expr rest-expr]
  ;; Builds a (2009-era) clojure.lang.LazySeq, which does not cache: the
  ;; zero-arg fn body is re-evaluated for first, the one-arg body for rest.
  (list 'new 'clojure.lang.LazySeq
        (list `fn (list [] first-expr) (list [(gensym)] rest-expr))))
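A quick way to see the difference at the REPL (a sketch, assuming the modified macro above is in scope and behaves as described):

(def s (lazy-cons (do (println "computing first") 1) nil))
(first s)   ; prints "computing first"
(first s)   ; prints it again: nothing was cached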
So anything I import from this namespace will build an uncached sequence. To make the experiment as extreme as possible, I then went to every bit of Clojure code I've written (which I admit isn't that much, but hey, experiments have to start somewhere), and excluded all these functions from the core, using my uncached imported functions instead.
I found that by using uncached sequences, my code felt a little zippier. The vast majority of the sequences I construct are used for only one pass. Furthermore, most of my sequences are very long (possibly infinite) and can be generated quickly, so the uncached behavior was a great default fit for me.
One interesting example from my code is a function I wrote that produces a sequence of all the permutations of a given collection (a sketch of such a function follows). Generating the next permutation in the sequence is not exactly a trivial operation, so caching does in fact speed things up if you traverse the permutation sequence multiple times. However, what I discovered is that there's such a huge time hit from allocating the memory for caching the first time through that you'd have to traverse the permutation sequence at least 20 times to begin to make up for the time lost to caching. In any case, caching becomes completely impractical once you hit permutations of 10+ items, so I've concluded that a permutations sequence should just be uncached.
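For concreteness, here is one minimal way to write such a function (a sketch assuming distinct elements, not necessarily the implementation used in these experiments):

(defn permutations [coll]
  ;; for is lazy, so successive permutations are generated on demand
  (if (empty? coll)
    '(())
    (for [x coll
          p (permutations (remove #{x} coll))]
      (cons x p))))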
Now at one point, I applied a filter to my permutation sequence, to extract permutations that had a certain property. This filtered sequence is something that did in fact make sense to cache, provided I intended to use it more than once. Fortunately, the Clojure API already includes a function called cache-seq which does exactly that. I found it very easy to get the caching behavior I wanted for this specific case – at the point where I defined the filtered sequence, I wrapped it in a call to cache-seq. Alternatively, I could have called vec on the sequence to realize it explicitly.
(def fs (cache-seq (filter pred (permutations (range 10)))))
So, at least in my own code, the default of not caching sequences worked rather well. There was one instance where I needed to cache the sequence, and it was easy to accomplish that. But again, I need to admit that I've only written a small amount of Clojure code (probably no more than 2kloc). So I can't claim this proves anything. I'm providing my simple uncached library so that others can also try this very interesting experiment.
Experiment #2 – Take your pick
If we assume that defaulting to uncached isn't right for everyone, there's still the open question of what it would be like to program in a version of Clojure that offers a choice between cached and uncached versions of lazy-cons and its core sequence functions. To explore this option, I made use of the same “uncached” library, but rather than excluding the core functions and overriding them with the uncached versions, I just “require”d my uncached library so that both versions of functions were available to me. So I could call lazy-cons or uncached/lazy-cons, map or uncached/map, for or uncached/for.
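In code, the setup looked something like this (a sketch of the arrangement described above):

(require 'uncached)              ; the experimental library
(map inc (range 5))              ; core version: cached lazy seq
(uncached/map inc (range 5))     ; same result, but uncached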
One really nice aspect of Clojure's design is that anything adhering to the sequence interface works just fine with all the sequence functions. So whether you choose to build a cached or an uncached sequence makes not one bit of difference to the consumers of your sequence. Once you make your decision as to whether a sequence should be cached or uncached, you can basically forget about it, and everything works seamlessly as you pass that sequence around.
Despite that, at first it felt like a burden to constantly think about whether I needed a cached or uncached version of a sequence. But then again, I had already been doing similar analysis to avoid getting burned by a memory crash from caching, so really it wasn't much different from before. The main difference was that now I could actually specify that I wanted something uncached, rather than relying on the function-wrapping workaround. Consuming the two types of sequences was equally easy, and I got a slight performance boost as well.
I also noticed something rather interesting in the patterns of when I tended to call cached vs. uncached versions of the core functions.
For some of the functions, I was always calling the uncached versions, namely cycle, repeat, replicate, interleave, interpose, take, take-while, butlast, concat, and lazy-cat. And thinking about it further, I honestly can't come up with any time you'd want a cached version of these functions. Remember that if the underlying sequences you are operating on are cached, the results of these functions are equally persistent, so it's really a question of how time-consuming their own operations are, and these have very little overhead. For this reason, I believe that, even if Clojure makes no other changes in its approach to laziness, it would be a simple, non-breaking, but significant improvement to change the above core functions to internally use lazy-seq rather than lazy-cons.
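To see why caching buys nothing for these thin wrappers, consider this sketch (it assumes the cache-seq function and the experimental uncached namespace from above; expensive-step just stands in for real work):

(defn expensive-step [n] (Thread/sleep 1) (* n n))
(def costly (cache-seq (map expensive-step (range 1000))))
;; take does only trivial bookkeeping per element: re-walking window repeats
;; that bookkeeping, while the expensive elements come from the cached
;; costly sequence underneath, so caching window itself would gain nothing.
(def window (uncached/take 100 costly))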
On the other hand, I found that distinct, filter, remove, and drop-while were the most likely to need to be cached.
If everything cleanly fell into the category of either definitely needing to be cached or definitely needing to be uncached, things would be simple. Alas, that is not the case. For things like map, for, drop, and take-nth, it all depends on how complex the supplied functions are (or how big the n is).
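For example (a sketch using the experimental uncached namespace; the slow per-element function is invented):

;; cheap per-element work: caching would only waste memory
(uncached/map inc (range 1000000))
;; slow per-element work: worth caching if traversed more than once
(map #(reduce + (range (* % 1000))) (range 100))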
So for those functions, it is very useful to be able to choose cached or uncached. But this raises the question of what will happen when other programmers start creating sequence-producing functions. In some cases they'll be able to make an executive decision in advance as to whether the resulting sequence is cached or uncached. But what about the cases where the consumer needs to make the choice? Do we expect the programmer to provide both a cached and an uncached version?
Contrast this with Experiment #1, in which lazy-cons always produces uncached sequences. With that behavior, the programmer of a new sequence-producing function just uses the (uncached) lazy-cons – the consumer knows the result will be uncached, and can easily turn it into a cached sequence at the point of naming, if necessary.
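That consumer-side conversion is a one-liner (make-widgets is a hypothetical library function built on the uncached lazy-cons):

(def widgets (cache-seq (make-widgets)))   ; make-widgets is hypothetical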
Summarizing Experiment #2: I really liked the added control and the ability to select cached or uncached sequences, but I just can't see how people will easily write libraries that provide both options.
Experiment #3 – Intelligent auto-selection
Since some sequences should clearly be cached, and some clearly not, it would be ideal if the borderline cases could be chosen intelligently by the language in order to completely remove the cognitive burden of constantly having to choose. At first, I thought a scheme might work in which the cached/uncached behavior of lazy-cons depends on the nature of the thing you're consing onto, but that isn't really useful: the desired behavior depends more on the complexity of the delayed function. After some experimentation, I've concluded that the decision cannot be automated, so this experiment was definitely a failure.
Experiment #4 – Uncaching a cached sequence
The major problem with Experiment #2 is that it forces library writers to supply two flavors of their sequence-generating functions, which is impractical. So I tried to get really clever. For this experiment, I went back to the standard lazy-cons behavior, i.e., caching by default. But then I tried to write a macro that would suppress caching for any sequence built with lazy-cons. I did this by setting up a global *cached* var with a root binding of true; lazy-cons picks its behavior based on the current value of *cached*, and a special uncached macro binds the var to false. Like this:
(def *cached* true)

(defmacro lazy-cons
  [first-expr rest-expr]
  ;; Expands to a runtime check of *cached*: a caching LazyCons when true,
  ;; otherwise an uncached LazySeq whose delayed bodies re-establish the
  ;; binding, so that nested lazy-cons calls stay uncached as well.
  (list 'if '*cached*
        (list 'new 'clojure.lang.LazyCons
              (list `fn (list [] first-expr)
                    (list [(gensym)] rest-expr)))
        (list 'new 'clojure.lang.LazySeq
              (list `fn (list [] (list 'binding ['*cached* 'false]
                                       first-expr))
                    (list [(gensym)] (list 'binding ['*cached* 'false]
                                           rest-expr))))))

(defmacro uncached [& rst]
  `(binding [*cached* false]
     ~@rst))
This basically works, in the sense that you can say something like (uncached (map * (iterate inc 0) (iterate inc 1))) and the uncached macro affects all the calls to lazy-cons within map and iterate, so you've forced the whole thing to be uncached “all the way down”. But the way my macro works, uncached sequences become extremely slow. Because bindings aren't captured by the closures, the instruction to rebind *cached* has to be threaded through the delayed closures, and this noticeably hinders the performance of uncached sequences. If you flipped it around and made uncached the default, then cached sequences would suffer the performance hit. Is there a better way to write this macro? If not, I must decree this experiment a failure.
Conclusions
I find Clojure's current cache-by-default-with-no-option-for-uncached laziness to be unsatisfying. I genuinely hope there is a better solution, and I want to help find it. Clearly, sequences generated from ephemeral stream-like entities must always be cached. But for the other types of sequences, I found my experiments with uncached-by-default-with-option-for-cached laziness to be quite pleasant. This may very well be a function of my own programming niche, so I've provided a simple uncached library so others can try to replicate this experiment with their own code. If more people report success with uncached-by-default, maybe a stronger case can be made for change.
My other experiments were less successful, although I learned quite a bit from trying them, which is why I reported on those experiments as well. Most importantly, I gained a deeper understanding of what types of functions tend to produce sequences that need to be cached and which ones tend to produce sequences that should be uncached. This suggests that, at a minimum, some of the core library functions would benefit from being changed to produce uncached sequences.
Perhaps someone else will see a way to turn one of these approaches into something workable, or provide an entirely new solution.
--- Reposted from http://programming-puzzler.blogspot.com/2009/04/adts-in-clojure.html