Exploring CPU design using Haskell
Jul 6th, 2012 by axman6

For some time now, I’ve been thinking about designing my own CPU architecture. Last week, I couldn’t get the thought out of my head, and I finally gave in and started to really think about what I’d want from a moderately simple CPU. I’ve decided to document the process as I go, to hopefully force myself to finish this project; I’m often quite bad at starting something I find fun and losing interest before I get to something I’d call complete. I am hoping to change this… I really hope any potential future employers don’t read this bit…

My aim for this project is to have something I can actually run programs on, and maybe even get an LLVM backend written so I can compile basic C programs for it. I plan to implement all the hardware design using the Haskell library Kansas Lava, which allows for designing hardware which can be both simulated in Haskell (you can play with most circuits in GHCI, which is amazingly nice), as well as produce VHDL which can be synthesised and used to configure things like an FPGA. My goal is to have this design running on my Spartan-3E FPGA Starter Board, or possibly one of these (due to its 64bit wide memory interface). So, on to the design!

Obviously it has to be RISC; my skills in hardware design are rusty enough without me having to figure out how to parse binary data in hardware, and besides, no one uses CISC any more these days, they only pretend. But just saying it’s RISC doesn’t get me far, and I only had a vague idea of what I wanted to be able to do. There were lots of features from ARM that I wanted:

  • (Almost) all instructions are conditional, which can make for some very efficient code, both in terms of speed and space required.

  • Lots of registers. Well, I guess 16 is lots, but I wanted more.

  • Most arithmetic instructions have free shifts on their second argument, so there’s no need for dedicated shift instructions. This is pretty neat, and I decided I wanted more free stuff! We’ll get to that in my next post.

There were also some other architectures that intrigued me, notably SPARC.

  • A register set to constant zero, useful for simplifying many operations. Sometimes you want to perform a computation but only care about some of its side effects, such as whether it overflows. You can just use this register as the destination: the result is lost, but the side effects aren’t. There’s also no need for a negation instruction, since negating x is just subtracting x from zero.

  • A register window. On function calls, the registers available to the new function are not the same as those of the calling function, but there is an overlap of 8 registers, which is where the first 8 (I think) function arguments are passed. When the function returns, the window slides back, and the result will have been passed back in what are now the top 8 registers. I decided this is not a feature I need, because… it seems complicated to implement, and as far as I can tell the advantages in a design as simple as mine aren’t really worth it.

  • A branch delay slot. I’d forgotten about these during my initial thinking, and realised they would be a useful thing to have to make implementation easier. Essentially, when a branch instruction is executed, the instruction immediately following it is executed before the branch actually occurs. In a pipelined architecture this can be quite useful, as it helps avoid pipeline stalls. I think one of the main reasons delay slots exist is the extra time needed to fetch the branch target before it can be executed: the CPU could either stall for a cycle (or more) waiting for it, or do some useful work in the meantime.
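To make the delay slot concrete, here’s roughly how it reads in assembly (the syntax is my own made-up placeholder, since I haven’t settled on an assembler format yet):

```
br   loop            # branch back to the top of the loop...
add  r4, r4, r2      # ...but this delay-slot instruction still runs
                     # before the branch actually takes effect
```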

So with this I got started. I decided, somewhat arbitrarily, that I wanted this to be a 64 bit architecture, with 64 bit wide instructions. This choice would allow me to have more registers than architectures like ARM. It also meant I had more room for constants in instructions, making a lot of tasks easier, and avoiding memory accesses for almost all constants (I’ve ended up with constants up to 36 bits). Initially I was going to go all out, and have 256 64bit registers, but I figured this was a bit of a waste, and I eventually decided (again, somewhat arbitrarily) on 64 registers (plus some special purpose ones).

I also really wanted conditional execution of instructions, and almost all instructions will be conditional. For those not in the know, this means that an instruction’s result only takes effect if certain condition flags are set. This can lead to some extremely efficient branchless code: where in the past you would have had to jump between the two clauses of an if-else statement, now you can just perform the comparison and have the instructions from each clause execute conditionally. Sure, you waste a few cycles, but you won’t stall the pipeline, and it usually means having fewer instructions in the code.
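As a sketch of what I mean, here’s an if-else collapsed into straight-line code (the condition-suffix mnemonics are borrowed from ARM purely for illustration; my actual encoding is still undecided):

```
cmp   r2, r3         # set flags from r2 - r3
addeq r4, r4, r5     # 'then' clause: runs only if r2 == r3
subne r4, r4, r5     # 'else' clause: runs only if r2 /= r3
```

No branch, no pipeline bubble; an instruction whose condition fails simply does nothing.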

Next I started to think about what operations my CPU would need, and what I would like on top of the basics. I came up with a list of basic arithmetic instructions:

Arithmetic instructions
Instruction Description
add{c}{s} Addition of 64bit two’s complement numbers
sub{c}{s} Subtract
rsub{c}{s} Subtract with arguments reversed. The reason for including this will become apparent in my next post.
mul{s} Multiply
mula{s} Multiply and accumulate. res = res + a*b
addsat{s} Saturating addition (Because ARM has it, and it seemed nifty)
padd{32,16,8} Parallel addition
psub{32,16,8} Parallel subtraction
pmul{32,16,8} Parallel multiplication
and{s} Bitwise and
or{s} Bitwise or
xor{s} Bitwise exclusive or
nand{s} Bitwise not-and
nor{s} Bitwise not-or
cmp Comparison (cmp rm op2 is an alias for subs r0 rm op2)
max{,32,16,8}{s} Maximum
min{,32,16,8}{s} Minimum

The things in curly brackets are variants of the instruction. Instructions with the {s} variant can set the condition flags based on their result; for example, adds can set the carry flag, indicating that the addition had a carry out past the end of the result. Instructions with a {c} variant (i.e. addc) will perform their operation using the carry flag as an input, in whatever manner makes sense for the given instruction. For example, to add two 128 bit numbers in registers r10, r11 and r20, r21 with the result going into r30, r31, you might use something like:

adds r30, r10, r20; # Add, setting flags (ie carry)
addc r31, r11, r21; # Add, using the previously set carry bit

Instructions with sizes after them, as you might expect, operate on differing sized inputs. padd can add 2×32bit numbers, 4×16bit, or 8×8bit.
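Since I’ll be simulating everything in Haskell anyway, the lane behaviour is easy to pin down as a reference model. Here’s a quick sketch of what I intend padd to mean in its 32-bit form (the function name and exact semantics are my working assumption, not a final spec): each lane wraps independently, and no carry ever crosses a lane boundary.

```haskell
import Data.Word (Word64)
import Data.Bits ((.&.), (.|.), shiftL, shiftR)

-- Reference model for padd32: add two 64-bit words as two independent
-- 32-bit lanes; each lane wraps on overflow, carries never cross lanes.
padd32 :: Word64 -> Word64 -> Word64
padd32 x y =
  let lo = ((x .&. 0xFFFFFFFF) + (y .&. 0xFFFFFFFF)) .&. 0xFFFFFFFF
      hi = ((x `shiftR` 32)    + (y `shiftR` 32))    .&. 0xFFFFFFFF
  in (hi `shiftL` 32) .|. lo
```

For example, padd32 0x00000001FFFFFFFF 0x0000000100000001 gives 0x0000000200000000: the low lane wraps to zero and its carry is discarded rather than spilling into the high lane. The 16- and 8-bit variants follow the same pattern with more lanes.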

Then there are the branch instructions. Since almost all instructions will be conditional, I only really need two kinds of branches: a standard branch, which covers all types of conditional branches automatically, and some kind of call instruction, which not only modifies the program counter but also saves the return address somewhere. There also needs to be its dual, a return instruction, which sets the program counter to the previously saved value.

Branching instructions
Instruction Description
br Normal branch, pc <- src shiftL 3
call Function call, otherwise known as branch and link on ARM. ra <- (pc shiftR 3)+1; pc <- src shiftL 3
ret Function return. pc <- ra shiftL 3

There’s some odd stuff going on here, so I’ll explain. The shifts by three come from wanting to ensure that instructions are always word aligned. Doing this also means that we can jump to constants 8 times further away than would otherwise be possible. In the call instruction, we save the address of the next instruction to the return address register, and set the program counter to the address given to the instruction. Each function is responsible for saving ra if it’s going to make another function call, and restoring it before returning to its caller.
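To convince myself the shift bookkeeping works out, here’s a tiny Haskell model of the call/ret arithmetic above (the function names are mine; I’m treating pc as a byte address and instructions as 8 bytes wide):

```haskell
import Data.Word (Word64)
import Data.Bits (shiftL, shiftR)

-- call: save the word address of the *next* instruction, jump to src<<3
call :: Word64 -> Word64 -> (Word64, Word64)  -- (new ra, new pc)
call pc src = ((pc `shiftR` 3) + 1, src `shiftL` 3)

-- ret: turn the saved word address back into a byte address
ret :: Word64 -> Word64
ret ra = ra `shiftL` 3
```

A call at byte address 16 with src 5 gives ra = 3 and pc = 40; ret 3 then lands at byte 24, i.e. the instruction straight after the call, as intended.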

So far we’ve got enough to be a sorta, kinda, maybe Turing complete (assuming infinite registers…) machine, but there’s something quite important missing: memory access. This is something I have less planned out than the other kinds of instructions, since I’m not sure what sort of features would be really useful, so more time will have to be spent on this before I come up with a final design. What I have so far in terms of instructions is:

Load/Store instructions
Instruction Description
ld{,32,16,8} Load a {64,32,16,8}bit value from memory into a register. rdest <- mem[src]
st{,32,16,8} Store mem[dst] <- rsrc
ldsp{,32,16,8} Load relative to stack pointer (Frame pointer?)
stsp{,32,16,8} Store relative to stack pointer
push{,32,16,8} Push a value onto the stack [sp] <- rsrc; sp <- sp - {8,4,2,1}
pop{,32,16,8} Pop a value off the stack rdest <- [sp]; sp <- sp + {8,4,2,1}

Here we have some pretty standard instructions, though push and pop are missing from some RISC architectures because they’re easy to implement if you have direct access to the stack pointer as a general purpose register. I haven’t decided whether I’ll do this or not, but I think it’s likely, since one day it might be really useful to be able to swap stacks easily (maybe it’s a possible security risk… ha, look at me worrying about security risks in a CPU that so far has no ability to run an operating system, through lack of interrupts!). I’m also quite sure (thanks to shachaf on IRC) that my definitions of push and pop are wrong: there needs to be some adjustment of the stack pointer’s value before referencing it. I may also add a frame pointer to make life easier when working with function calls.
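For the record, here’s the corrected ordering I believe shachaf was getting at, modelled in Haskell with memory as a Map (this is a full-descending stack: move sp first on push and reference the new sp, then do the reverse on pop — the names and the Map representation are just for this sketch):

```haskell
import Data.Word (Word64)
import qualified Data.Map as M

type Mem = M.Map Word64 Word64

-- push: decrement sp by the operand size first, then store at the new sp
push :: (Word64, Mem) -> Word64 -> (Word64, Mem)
push (sp, mem) v = let sp' = sp - 8 in (sp', M.insert sp' v mem)

-- pop: load from sp, then increment it back
pop :: (Word64, Mem) -> (Word64, (Word64, Mem))
pop (sp, mem) = (mem M.! sp, (sp + 8, mem))
```

With this ordering, a pop straight after a push returns the pushed value and restores sp — the round-trip property my table above fails to guarantee.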

I may also add instructions to save a range of registers to the stack and load them back in, like ARM has (though I have no idea how to implement that just yet).

Lastly, there were some common operations, and some just plain cool ones I wanted to have available:

Instruction Description
ctz Count trailing zeros
ctlz Count leading zeros
popcnt{,32,16,8}{a} Bit population count
rpow2 Round to next power of two
extract Extract a range of bits res = op1[m..n] shift o. This might get removed, and made one of the instruction argument formats.
mor See pages 11 and 12 (physical 16 and 17) of https://docs.google.com/viewer?url=http://www-cs-faculty.stanford.edu/~uno/fasc1.ps.gz
mxor As for mor above

Many of these instructions are trivial to implement in hardware, but can take many many cycles to implement in software without proper support. I’m open to adding more of these if anyone can come up with some instructions they wish they had in their CPU of choice.

The last thing I wanted to talk about before finishing off this first post was my ideas on registers. I mentioned earlier that I liked the SPARC idea of having a constant zero register. After I came up with the idea of the extract instruction, I realised that having a register of all 1 bits would also be useful for creating masks. Having this means you could do things like complement all the bits in a certain range like so:

xor r4, r4, r1[7:10]; # Use the constant 1's register to form a mask

I think this will turn out to be extremely useful in many situations.

So far this is what I’ve come up with as a tentative plan for registers:

Register (alt name)
r0 Constant zero register
r1 Constant 0xFFFFFFFFFFFFFFFF
r2-r60 General purpose (Maybe make r60 the frame pointer?)
r61 (sp) Stack pointer
r62 (ra) Return address
r63 (ip) Instruction pointer

I have some ideas about what the calling convention for this architecture should be, but that will have to wait for a later post.

In the coming weeks and months I hope to flesh out the details and design of this architecture, and hopefully you’ll find it fun to follow along. I plan to put everything up on github eventually, but I want something more concrete first. My next post will go into more detail about the instruction formats I’ve come up with, as well as my first adventure into Kansas Lava and creating a moderately complex adder/subtracter circuit. Until next time, happy hacking!

OpenCL From Haskell – Hello World!
Dec 17th, 2011 by axman6

It’s been a very long time since I’ve even looked at this blog, so I thought I should do something about that. For the past two days, I’ve been working on making the OpenCLWrappers (née OpenCLRaw) package more usable, fixing some bugs along the way.

The main change I wanted to make was to move from everything returning IO (Either ErrorCode a) or IO (Maybe ErrorCode) to a more usable OpenCL monad. The obvious way to do this is to use ErrorT:

> type OpenCL a = ErrorT ErrorCode IO a

(Be sure to comment out the previous line if you decide to use this in a literate Haskell file.)

This involved first converting all the IO (Maybe ErrorCode) functions to IO (Either ErrorCode ()), and then implementing the OpenCL monad wrapper on top of that. This has resulted in a new set of modules under System.OpenCL.Monad.
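The wrapping itself is one-liner territory. Here’s a standalone sketch of the idea — note this is not the package’s actual code: I’m using ExceptT from transformers (the modern replacement for the now-deprecated ErrorT) and a stand-in ErrorCode type, so the names clash deliberately with the real module’s:

```haskell
import Control.Monad.Trans.Except (ExceptT (..), runExceptT)

newtype ErrorCode = ErrorCode Int deriving (Eq, Show)  -- stand-in type

type OpenCL a = ExceptT ErrorCode IO a

-- Lift a raw IO (Either ErrorCode a) call into the OpenCL monad;
-- the first Left short-circuits the rest of the do-block.
wrapCL :: IO (Either ErrorCode a) -> OpenCL a
wrapCL = ExceptT

runOpenCL :: OpenCL a -> IO (Either ErrorCode a)
runOpenCL = runExceptT
```

Sequencing wrapped calls with do-notation then gives exactly the “stop at the first error code” behaviour you’d otherwise write by hand with nested case expressions.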

To demonstrate how to make use of this initial work, I’ll use a slightly modified version of the canonical CUDA/OpenCL example, which takes two vectors of floats and adds them. My slight modification is to make the kernel compute the element-wise hypotenuse of the two vectors. First, let’s start with the OpenCL kernel, which should make clearer what we’re trying to do:

__kernel void vectorHypot(
    __global const float * a,
    __global const float * b,
    __global       float * c)
{
    int nIndex = get_global_id(0);
    c[nIndex] = sqrt(a[nIndex] * a[nIndex] + b[nIndex] * b[nIndex]);
}

Next comes the Haskell code. To make use of this code, you’ll need my latest version of OpenCLWrappers from github.

We start, as with any decent literate Haskell document, with various imports to break the flow of the document (note to self: investigate using anansi in the future to see if it makes this easier).

> {-# LANGUAGE BangPatterns #-}
> module Main where
> import System.OpenCL.Monad
> import System.OpenCL.Wrappers.Types
> import System.Random (randoms, mkStdGen)
> import Foreign.Marshal.Array (newArray, peekArray)
> import Foreign.Marshal.Alloc (free)
> import Foreign.Ptr (castPtr, nullPtr, Ptr)
> import Control.Monad (forM, forM_)
> import Data.Bits ((.|.))
> import Data.Time (getCurrentTime, diffUTCTime)

Next, we have a function for timing IO actions. I’m pretty sure it doesn’t work correctly, so I’d love some suggestions for a better way to do this!

> time :: IO a -> IO a
> time x = do
>     !before <- getCurrentTime
>     !a <- x
>     !after <- getCurrentTime
>     print $ diffUTCTime after before
>     return a

And finally, on to the guts of the program. We start by reading in the source for the kernel. Then we create two lists of len random Float values. I’m sure there are better ways to do this too, but I was after a quick (ha!) and dirty result.

The lists are then written to arrays, which are cast to pointers to () (equivalent to void *) so that they match the types required by clCreateBuffer later. Then we run the computation (via runHypot), the arrays are read and freed, and we check whether the results differ much from what we expect.

> len = 2^22 :: Int
> main = do
>     str <- readFile "kernel.cl"
>
>     let a = take len $ randoms (mkStdGen 1) :: [Float]
>         b = take len $ randoms (mkStdGen 2) :: [Float]
>
>     pa' <- newArray a
>     pb' <- newArray b
>     pc' <- newArray (replicate len (0.0 :: Float))
>     psize' <- newArray [len]
>     let pa = castPtr pa' :: Ptr ()
>         pb = castPtr pb' :: Ptr ()
>         pc = castPtr pc' :: Ptr ()
>         psize = castPtr psize' :: Ptr ()
>
>     time $ runHypot str pa pb pc
>
>     cres <- peekArray len pc'
>     free pa'
>     free pb'
>     free pc'
>
>     time $ print
>          $ take 100
>          $ map (\(a,b) -> a-b)
>          $ dropWhile (\(a,b) -> abs (a-b) < 10e-7)
>          $ zipWith3 (\a b c -> (sqrt (a*a + b*b), c)) a b cres

Now we get to the uh… fun part. It turns out that OpenCL is amazingly tedious for such a simple task. The process of running a kernel is as follows:

  1. Find out about the platforms available.
  2. Find out about all the devices you have access to. In my case, on my MacBook Pro I have access to one CPU and one GPU. This gets printed on the following line.
  3. Select a device to run the computation on. I chose the GPU, mainly because choosing the CPU didn’t work for some reason. I may investigate this in the future.
  4. Create an OpenCL context, which is used for all sorts of stuff…
  5. Create a command queue for the device. Each action you wish to perform on the device will be queued here, including moving data to the device’s memory, running the kernels themselves, and moving data back to the host’s memory.
  6. Next, the program is created from the source passed in (originally from kernel.cl, remember?).
  7. Next, we compile the program. You can see I’ve had to jump through some hoops to make this work. I technically could have just run clBuildProgram, but the way I’ve done it allows me to get some info about what went wrong with the compilation. Here I print out the compile/error log returned from the compiler if something goes wrong.
  8. Buffers are created, which will have the contents of the host pointers we allocated and passed as arguments copied into them. This step is what moves the data onto the device.
  9. Finally, we get to running the kernel. You may be wondering why I’m using the magic number maxWISize `div` 4 here… I’m using it because it worked. I was hoping that just setting the work item size to maxWISize would work, but for some reason it doesn’t. I might investigate this later…
  10. Now all that’s left is to read the data back from the device, and then free the memory used on the device. Once this is done, the pointer pc should contain our results.

> runHypot :: String -> Ptr () -> Ptr () -> Ptr () -> IO (Either ErrorCode ())
> runHypot str pa pb pc = runOpenCL $ do
>     pids <- clGetPlatformIDs                                    -- 1
>     dids <- fmap concat $ forM pids $ \pid ->
>         clGetDeviceIDs pid clDeviceTypeAll                      -- 2
>     infos <- forM dids $ \did ->
>         clGetDeviceInfo did clDeviceType
>     liftIO $ print infos
>     let devid = dids !! 1                                       -- 3
>     ctx <- clCreateContext [] [devid] Nothing nullPtr           -- 4
>     queue <- clCreateCommandQueue ctx (dids !! 1) []            -- 5
>
>     prog <- clCreateProgramWithSource ctx str                   -- 6
>     err <- liftIO $ runOpenCL $ clBuildProgram prog [devid] "" Nothing nullPtr
>     case err of                                                 -- 7
>         Left err -> do
>             x <- clGetProgramBuildInfo prog devid clProgramBuildLog
>             liftIO $ print x
>         Right x -> return x
>     kern <- clCreateKernel prog "vectorHypot"                   -- 8
>
>     let bytes = fromIntegral len * 4                            -- 9
>     pad' <- clCreateBuffer ctx (clMemReadOnly .|. clMemCopyHostPtr) bytes pa
>     pbd' <- clCreateBuffer ctx (clMemReadOnly .|. clMemCopyHostPtr) bytes pb
>     pcd' <- clCreateBuffer ctx clMemWriteOnly bytes nullPtr
>     pad <- liftIO $ newArray [pad']
>     pbd <- liftIO $ newArray [pbd']
>     pcd <- liftIO $ newArray [pcd']
>     clSetKernelArg kern 0 8 $ castPtr pad
>     clSetKernelArg kern 1 8 $ castPtr pbd
>     clSetKernelArg kern 2 8 $ castPtr pcd
>
>     (DeviceInfoRetvalCLsizeiList (n:_)) <-
>         clGetDeviceInfo devid clDeviceMaxWorkItemSizes          -- 10
>     let maxWISize = fromIntegral n
>     liftIO $ print maxWISize
>     eventRun <-
>         clEnqueueNDRangeKernel queue kern                       -- 11
>             [fromIntegral len]
>             [fromIntegral maxWISize `div` 4] []
>
>     eventRead <- clEnqueueReadBuffer pcd' True 0 bytes          -- 12
>                      pc queue [eventRun]
>
>     clEnqueueWaitForEvents queue [eventRun, eventRead]          -- 13
>     clReleaseMemObject pad'
>     clReleaseMemObject pbd'
>     clReleaseMemObject pcd'

To compile, make sure you call ghc with -lopencl or -framework OpenCL on OS X: ghc -framework OpenCL main.lhs

As you can see, this is a hell of a lot of work to go through for such a simple task, and this is why I hope to make a higher level set of wrappers in the nearish future. I would love to be able to do everything using either Vectors or Repa arrays (the latter would be more ideal). It would also be nice to create a DSL for creating OpenCL kernels, but that’s a long way away at the moment.

I think I’ll focus first on making a cleaner interface to things like attaining a context, and allocating data.

Anyway, that’s it for now. Let me know if you have any questions, or if anything doesn’t make sense.

New primitive functions for the Haskell Array library
Jan 23rd, 2011 by axman6

In response to a recent post highlighting some performance problems with arrays in Haskell, I decided that some fairly primitive functions are missing from the current array library. My attempt at fixing these issues is now on Hackage in the array-utils package. My hope is that some or all of these functions will be added to the array package in GHC 7.2.

The functions I have implemented basically try to remove as much bounds checking as possible, so their implementations all use the unsafeRead, unsafeWrite and unsafeIndex functions to help avoid extra overhead. Some of the functions included are:

updateElems :: (MArray a e m, Ix i) => (e -> e) -> a i e -> m ()

Which updates every element in the array with the given function.
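For comparison, the safe version you’d write yourself looks something like this (a sketch using only the public MArray API, without the unsafe indexing tricks the package actually uses; the name updateElemsRef is mine):

```haskell
import Data.Array.IO (IOArray, getElems, newListArray)
import Data.Array.MArray (MArray, getBounds, readArray, writeArray)
import Data.Ix (Ix, range)

-- Safe reference implementation: every read and write goes through the
-- bounds-checked MArray API -- exactly the overhead array-utils avoids.
updateElemsRef :: (MArray a e m, Ix i) => (e -> e) -> a i e -> m ()
updateElemsRef f arr = do
    bnds <- getBounds arr
    mapM_ (\i -> readArray arr i >>= writeArray arr i . f) (range bnds)

main :: IO ()
main = do
    arr <- newListArray (0, 4) [1, 2, 3, 4, 5] :: IO (IOArray Int Int)
    updateElemsRef (* 2) arr
    getElems arr >>= print   -- [2,4,6,8,10]
```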

updateElemsM :: (MArray a e m, Ix i) => (e -> m e) -> a i e -> m ()

The monadic version of updateElems.

updateElemsIx :: (MArray a e m, Ix i) => (i -> e -> e) -> a i e -> m ()

Like updateElems, but also provides the index to the update function. There’s a monadic version of this too.

updateWithin :: (MArray a e m, Ix i) => (e -> e) -> (i,i) -> a i e -> m ()

Which updates every element in the line/rectangle/prism defined by the start and end indexes.

updateOn :: (MArray a e m, Ix i) => (e -> e) -> [i] -> a i e -> m ()

Which updates the given indices.

updateSlice :: (MArray a e m, Ix i) => (e -> e) -> (i,i) -> a i e -> m ()

Which updates every element from the start index until the end index, so every element in the flat array from start to end.

Update: The difference between updateWithin and updateSlice is that if you have a 2D array with indices from (1,1) to (10,10) and you say updateSlice (+10) ((2,5),(4,2)) arr, then it will add 10 to all elements whose flat index is between index ((1,1),(10,10)) (2,5), which is 14, and index ((1,1),(10,10)) (4,2), which is 31. So it will update elements 5 to 10 on row 2, 1 to 10 on row 3, and 1 to 2 on row 4. If you used updateWithin here, it wouldn’t update anything, because range ((2,5),(4,2)) returns an empty list. I might do another post with images to help clear this up.
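You can see both behaviours straight from Data.Ix without touching an array (updateSlice and updateWithin are my functions, but index and range here are the standard library ones they’re built on):

```haskell
import Data.Ix (index, range)

main :: IO ()
main = do
    -- Flat (row-major) positions updateSlice walks between:
    print (index ((1,1),(10,10)) (2,5))       -- 14
    print (index ((1,1),(10,10)) (4,2))       -- 31
    -- The rectangle updateWithin would walk is empty:
    print (range ((2,5),(4,2)) :: [(Int,Int)]) -- []
```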

All functions in the module use Int based indexing and unsafe functions internally to hopefully speed up the code that’s generated.

I’m yet to benchmark these functions and see whether they would make any difference to the results of the above article (I doubt they’d be any faster than the Ptr versions). Whether they are faster or not, they should save a fair amount of easy-to-get-wrong code for a lot of people. When I do benchmark them, I’ll add the results to this blog.

Speaking of getting it wrong: while I am fairly confident in these functions, I haven’t fully tested them yet, so if you feel they would be useful to you and you run into strange results, I would love to know about it! I’m hoping to figure out how to get QuickCheck to run some tests, and hopefully I’ll have that done next weekend.

If you can think of any more functions you think should be in the array package, please let me know, and I’ll see if I can add them. All the code is available on GitHub.

Co-routines in Haskell
Jul 21st, 2010 by blackh

It is easy to implement co-routines in Haskell… but only if you know how. No fewer than three people asked me to blog about it, so here’s a quick guide to rolling your own co-routines. To understand this blog, you will need to have a basic understanding of monad transformers.

There are co-routine packages on Hackage, but I have not had much luck with them. The point here, really, is to show you how it all works.

What’s a co-routine?

A co-routine (called a ‘generator’ in Python) is where you create two interleaved flows of control on a single thread. Unlike threads, co-routines switch co-operatively using a ‘yield’ operation. (This is quite a good trick in GUI programming for implementing complex workflow that spans multiple GUI events, since most GUI libraries require everything to be on one thread.)

The example I’m presenting here works in this way: A caller executes a CoroutineT monad transformer, which adds the ‘yield’ operation to the underlying monad (which can be anything). From the caller’s point of view, the ‘yield’ looks like the monad has returned, but with a continuation. In the callee, ‘yield’ appears to block until the caller executes the continuation. In addition, we add the ability to pass a value in both directions.

So we’ve inverted the flow of control in the callee. Continuation passing style (CPS) can also do this, but co-routines are better than CPS because 1. it’s a bit neater, and 2. it allows for recursion.

One application of co-routines is to separate I/O from logic. By way of example I am going to implement an expert system for identifying fruit. I try not to use contrived examples, and as you can see, this time I have completely failed.

In this example, the CoroutineT sits on top of the identity monad, so it’s pure, but the approach works just the same on top of IO or anything else. This example is not deeply nested, but this approach happily supports any level of recursion or nesting.

So here’s our expert system logic. We’ll define CoroutineT shortly. You can read ‘yield’ as ‘askUser’:

import Data.Char (toLower)
import Control.Monad.Identity (Identity, runIdentity)

type Question = String
data Answer = Y | N deriving Eq
type Expert a = CoroutineT Answer Question Identity a
data Fruit
    = Apple
    | Kiwifruit
    | Banana
    | Orange
    | Lemon
    deriving Show
identifyFruit :: Expert Fruit
identifyFruit = do
    yellow <- yield "Is it yellow?"
    if yellow == Y then do
        long <- yield "Is it long?"
        if long == Y then
            return Banana
          else
            return Lemon
      else do
        orange <- yield "Is it orange?"
        if orange == Y then
           return Orange
         else do
           fuzzy <- yield "Is it fuzzy?"
           if fuzzy == Y then
               return Kiwifruit
             else
               return Apple

Our ‘Expert’ type above…

type Expert a = CoroutineT Answer Question Identity a

…specifies the type we are sending into our co-routine (Answer) and the type we are getting out of it (Question) as viewed from the caller.

Now we just need a main program to drive it. Because the I/O is separated out, we can later replace this with a nice touch-screen GUI for the seriously fruit-impaired.

main :: IO ()
main = do
    putStrLn $ "Expert system for identifying fruit"
    run identifyFruit
  where
    run :: Expert Fruit -> IO ()
    run exp = handle $ runIdentity $ runCoroutineT exp
    handle (Yield q cont) = do
        putStrLn q
        l <- getLine
        case map toLower l of
            "y"   -> run $ cont Y
            "yes" -> run $ cont Y
            "n"   -> run $ cont N
            "no"  -> run $ cont N
            _   -> putStrLn "Please answer 'yes' or 'no'" >> handle (Yield q cont)
    handle (Result fruit) = do
        putStrLn $ "The fruit you have is: "++show fruit

When we run our co-routine, it returns with one of these two events:

  • Yield, which happens when yield is executed. It gives us the output value (Question) and the continuation, which when passed the input value (Answer) gives us the same ‘Expert’ type we started with.
  • Result, which happens when the co-routine has finished executing.

So how does CoroutineT work?

We’ll start with the types:

data Result i o m a = Yield o (i -> CoroutineT i o m a) | Result a
-- | Co-routine monad transformer
--
--   * i = input value returned by yield
--
--   * o = output value, passed to yield
--
--   * m = next monad in stack
--
--   * a = monad return value
data CoroutineT i o m a = CoroutineT {
        runCoroutineT :: m (Result i o m a)
    }

Hopefully that’s pretty straightforward. ‘yield’ is defined like this:

-- | Suspend processing, returning a @o@ value and a continuation to the caller
yield :: Monad m => o -> CoroutineT i o m i
yield o = CoroutineT $ return $ Yield o (\i -> CoroutineT $ return $ Result i)

The key point here is that the continuation does nothing except return the value, which is what we want it to do when we run a monad that contains only a yield.

Most of the magic is in the definition of >>=, thus:

instance Monad m => Monad (CoroutineT i o m) where
    return a = CoroutineT $ return $ Result a
    f >>= g = CoroutineT $ do
        res1 <- runCoroutineT f
        case res1 of
            Yield o c -> return $ Yield o (\i -> c i >>= g)
            Result a  -> runCoroutineT (g a)
    -- Pass fail to next monad in the stack
    fail err = CoroutineT $ fail err

A typical monad would normally execute f then pass its result to g and execute that, and this is in fact exactly what we do in the Result case. Ho hum.

But there’s no law that says you have to execute g. This is Haskell so we can do whatever we like. g is just a plain old closure representing the continuation.

So what we do in the Yield case is take the continuation that executing f gave us, and bind that to the continuation g, then bail out of the monad (in the same way ErrorT does when it gets an error), handing our constructed continuation to the caller. So we end up with a closure that represents the entire execution state of the monad, and it doesn’t matter how deeply nested we are. It just puts the continuation together in the right way as we unravel everything on our way back to the caller.
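If you want to see all the pieces running together in one self-contained file, here’s a minimal session (the Functor and Applicative instances are boilerplate that modern GHC requires; the adder coroutine and its list-driven caller are mine, just for illustration):

```haskell
import Control.Monad (ap, liftM)

data Result i o m a = Yield o (i -> CoroutineT i o m a) | Result a

newtype CoroutineT i o m a
    = CoroutineT { runCoroutineT :: m (Result i o m a) }

instance Monad m => Functor (CoroutineT i o m) where
    fmap = liftM
instance Monad m => Applicative (CoroutineT i o m) where
    pure a = CoroutineT (return (Result a))
    (<*>)  = ap
instance Monad m => Monad (CoroutineT i o m) where
    f >>= g = CoroutineT $ do
        res1 <- runCoroutineT f
        case res1 of
            Yield o c -> return (Yield o (\i -> c i >>= g))
            Result a  -> runCoroutineT (g a)

yield :: Monad m => o -> CoroutineT i o m i
yield o = CoroutineT (return (Yield o (\i -> CoroutineT (return (Result i)))))

-- A coroutine that asks for two numbers and returns their sum
adder :: Monad m => CoroutineT Int String m Int
adder = do
    x <- yield "first number?"
    y <- yield "second number?"
    return (x + y)

-- Drive it in IO, feeding answers from a list
feed :: [Int] -> CoroutineT Int String IO a -> IO a
feed ins co = do
    r <- runCoroutineT co
    case r of
        Yield q cont -> do
            putStrLn q
            let (i:rest) = ins
            feed rest (cont i)
        Result a -> return a

main :: IO ()
main = feed [3, 4] adder >>= print   -- prints both questions, then 7
```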

Here’s the code in downloadable form:

This code is released in the public domain.

Stephen Blackheath, Manawatu, New Zealand

AusHac2010 Day 2 progress
Jul 17th, 2010 by axman6
Day 2 of AusHac2010 is coming to an end, and we’ve made a lot of progress:

Bernie Pope has been making great progress with a new MPI binding for Haskell.

Ben Lippmeier, Erik de Castro Lopo and Ben Sinclair have been busily hacking on DDC, with 13 commits today alone.

Stephen Blackheath has been working on some code using the Accelerate library that rasterises triangles for use in a commercial computer game.

Hamish Mackenzie, Jens Petersen and Matthew Sellers have been working on better Yi integration for Leksah, working on using Yi’s current configuration file, and improving the “launch experience”, focusing on eliminating the requirement of creating an initial workspace file.

Lang Hames has been using his experience with LLVM from working at Apple as an intern to improve various low level problems in LLVM. His work should help resolve some of the problems the LLVM backend to GHC has, but should also be very beneficial to many other LLVM users. While doing this, he’s written a very nice tool that illustrates register liveness, with further work focusing on colouring the HTML output to show register pressure. The LLVM guys seem quite excited about this work, which is great.

Mark Wotton and Sohum Banerjea have been trying to extend Hubris, the Haskell-Ruby bridge, to work with polymorphic functions. Their heads are quite sore from all the head banging. Raphael Speyer has been working on an install script to make installation much easier for users… but only if you use Ubuntu so far.

Ivan Miljenovic has been prematurely optimising his containers library, before finalising the API. This library is designed to let library writers leave the choice of which container data structure to output to the library consumer, as well as making it easier to change which data structure you want to use in your code, with minimal code change. See his blog post for more details.

Trevor McDonell has been working on the CUDA backend to Accelerate, adding support for efficient nested tuple types, and other bug fixes. Sean Lee has been helping out with testing of this code, along with Manuel Chakravarty.

With one more full day to go, I think we’ll be getting a lot of awesome work done tomorrow!

Day 2 of AusHac2010 is coming to an end, and we’ve made a lot of progress:

Bernie Pope has been making great progress with a new MPI binding for Haskell

Ben Lippmeier, Erik de Castro Lopo and Ben Sinclair have been busily hacking on DDC, with 13 commits today alone

Stephen Blackheath has been working on some code using the Accelerate library that rasterises triangles for use in a commercial computer game.

Hamish Mackenzie, Jens Petersen and Matthew Sellers have been working on better Yi integration for Leksah, working on using Yi’s current configuration file, and improving “launch experience”, focusing on eliminating the requirement of creating an initial workspace file.

Lang Hames has been using his experience with LLVM from working at Apple as an intern to improve various low level problems in LLVM. His work should help resolve some of the problems the LLVM backend to GHC has, but should also be very beneficial to many other LLVM users. While doing this, he’s written a very nice tool that illustrates register liveness, with further work focusing on colouring the HTML output to show register pressure. The LLVM guys seem quite excited about this work, which is great.

Mark Wotton and Sohum Banerjea have been trying to extend Hubris, the Haskell-Ruby bridge, to work with polymorphic functions. Their heads are quite sore from all the head banging. Raphael Speyer has been working on an install script to make installation much easier for users – though so far only on Ubuntu.

Ivan Miljenovic has been prematurely optimising his containers library, before finalising the API. This library is designed to let library writers leave the choice of which container data structure to output to the library consumer as well as making it easier to change which data structure you want to use in your code, with minimal code change. See his blog post for more details.

Trevor McDonell has been working on the CUDA backend to Accelerate, adding support for efficient nested tuple types, and other bug fixes. Sean Lee has been helping out with testing of this code, along with Manuel Chakravarty.

With one more full day to go, I think we’ll be getting a lot of awesome work done tomorrow!

AusHac2010 Day 1 progress
Jul 17th, 2010 by axman6

So, the first half day of AusHac2010 was yesterday. We had about 12 people turn up, which isn’t too bad for a Friday.

Erik de Castro Lopo did a lot of work on Ben Lippmeier’s DDC compiler for his Disciple language.
There was some initial work on the Accelerate library for accelerated array computations in Haskell, using various backends. Most of the current work is aiming at making the CUDA backend usable, after which more backends will likely be added, such as an LLVM backend, and possibly an OpenCL backend as well.

Due to the restricted time yesterday, not all that much work was started, but day 2 (see my next post!) has been much more productive.

Chunked XML parsing is the latest thing, you know
May 15th, 2010 by blackh

Uhh, hello.  Welcome to my first blog post ever – and thanks Axman6 for letting me be a “guest blogger”.

It’s rather unfashionable on #haskell, but I like XML.  So, 18 months ago, I took over the hexpat package from Evan Martin.  It was going to be a small project – a simple XML parser binding to Expat.  The fastest Haskell XML parser alive.  Or so I thought.

It’s become a passion, a way of life.  It’s XML parsing in Haskell the way I think it should be done.  The best as well as the fastest.  (I like to think big.)

I’ve finally finished adding all the features that I and a number of contributors wanted, and I would now like to announce that hexpat is going beta.  I want to make this package really, really good, so please help by testing and critiquing.  I want to stabilize hexpat, but hexpat-iteratee will be unstable for a while yet.

The future is chunky

The cherry on top of the hexpat galaxy is the still experimental hexpat-iteratee based on Oleg Kiselyov’s iteratee, which is a bit of a hot ticket these days.  It provides lazy XML parsing without the practical issues and philosophical dodginess inherent in Haskell’s lazy I/O through functions like hGetContents.

hexpat-iteratee allows for effectful XML processing done in a functional way, and the magic behind this is Yair Chuchem’s humbly named List package.  It is “merely” a generalization of lists, and I think it deserves to be a common piece of infrastructure.

The example project is a chunked XML-over-TCP movie database lookup server.  Every home should have one.  So, let’s start like all good blogs do, with imports:

{-# LANGUAGE OverloadedStrings #-}
import Control.Concurrent
import Control.Exception
import Control.Monad
import Control.Monad.IO.Class
import Control.Monad.ListT
import qualified Data.ByteString as B
import qualified Data.ByteString.Unsafe as B (unsafeUseAsCStringLen)
import Data.Iteratee
import Data.Iteratee.IO.Fd
import Data.Iteratee.WrappedByteString
import Data.List.Class as List
import Data.Maybe
import Data.Text (Text)
import qualified Data.Text as T
import Network
import System.IO
import System.Posix.IO (handleToFd, fdWriteBuf, closeFd)
import System.Posix.Types (Fd)
import Text.XML.Expat.Chunked
import qualified Text.XML.Expat.Chunked as Tree
import Text.XML.Expat.Format
import Foreign.Ptr

The first thing we want to do is listen on a socket.  I could use handles, sockets, or file descriptors.  With handles, this code does not work interactively: disabling the buffering does not seem to work at all in GHC 6.10 or 6.12.  Sockets would be ideal, but to save myself writing an iteratee driver, I’m left with file descriptors, which unfortunately means this code only works on GHC 6.12 on a POSIX system.  fdPutStrBS is the only glue I need, then: it writes a ByteString to a Fd.  Here’s the code:
main :: IO ()
main = do
    let port = 6333
    putStrLn $ "listening on port "++show port
    ls <- listenOn $ PortNumber port
    forever $ do
        (h, _, _) <- accept ls
        forkIO $ handleToFd h >>= \fd -> do
            iter <- parse defaultParserOptions (session (fdPutStrBS fd))
            result <- enumFd fd iter >>= run
            print result
          `finally`
            closeFd fd

fdPutStrBS :: Fd -> B.ByteString -> IO ()
fdPutStrBS fd bs = B.unsafeUseAsCStringLen bs $ \(buf, len) ->
    writeFully (castPtr buf) (fromIntegral len)
  where
    writeFully _ len | len == 0 = return ()
    writeFully buf len = do
        written <- fdWriteBuf fd buf len
        if written < 0
            then fail "write failed"
            else writeFully (buf `plusPtr` fromIntegral written) (len - written)

Once we’ve accepted the connection, we get parse (from hexpat-iteratee) to make us an iteratee.  The second argument, “session (fdPutStrBS fd)” is the handler for processing the document.  We then pass this iteratee to iteratee’s enumFd, whose job it is to pull the input data out of the Fd and feed it into the parser. parse is monadic in order that it can start the handler before it receives the first data block through the iteratee. This is necessary in case the handler wants to generate output before it gets any input, which we want to do here.

The handler is a co-routine.  When it runs out of input data, it gets suspended, and control returns to enumFd.

session :: (B.ByteString -> IO ())  -- ^ Write output data to socket
        -> ListOf (UNode IO Text)   -- ^ Input XML document
        -> XMLT IO ()
session writeOut inputXML = do
    let outputXML = formatG $ indent 2 $ Element "server" [] (processRoot inputXML)
    execute $ liftIO . writeOut =<< outputXML
    return ()

formatG is a hexpat function to take a tree node and format it as XML, returning one of Yair’s Lists of ByteStrings.  indent is a filter that adds pretty indenting.  The Element is the top level tag of our output XML tree, and its third argument “processRoot inputXML” evaluates the child nodes of the output document.  The entire processing of the document is in a functional style.

execute here makes all the IO actually happen. It iterates over a List of monadic actions and sequences them. This translates into a sequence of writes of data blocks to the socket.  The elements in the list are monadic, so execute also must execute those in order to extract each output ByteString.

In this way, even though processRoot is pure at the top level, it can contain effectful computations.

processRoot :: ListOf (UNode IO Text) -> ListOf (UNode IO Text)
processRoot root = do
    Element _ _ children <- genericTake 1 root
    child <- children
    extractElements child
  where
    extractElements :: UNode IO Text -> ListOf (UNode IO Text)
    extractElements elt | isElement elt = processCommand elt `cons` mzero
    extractElements _                   = mzero

ListOf is a type function that conceals a long-winded type name.  This function maps the input document to a list of output nodes.

The root of the input document is actually given as a List containing one item – the top-level XML tag.  The reason for this is so that the handler has to ask for it to be pulled.  If it were just passed as a UNode IO Text, we would have to compute it before the handler was called, and the handler wouldn’t get a chance to produce output before it requests input.

The function is implemented using List’s Monad instance, which behaves exactly like a list monad.  The reason for genericTake 1 root is so we stop processing after the root node and don’t wait for a node that will never come.  I need to fix this in hexpat-iteratee.

`cons` is the generalized list cons operator like : and  `mzero` corresponds to [].
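Since List’s Monad instance behaves like the ordinary list monad, the same shape can be sketched with plain lists.  Note that Node, isElement and extract below are illustrative stand-ins for hexpat’s UNode and the real processRoot, not its actual API:

```haskell
import Control.Monad (mzero)

-- Toy stand-in for hexpat's UNode, for illustration only
data Node = Element String [Node] | Text String deriving (Eq, Show)

isElement :: Node -> Bool
isElement (Element _ _) = True
isElement _             = False

-- Same shape as processRoot, over ordinary lists: a pattern-match
-- bind in do-notation filters like a list comprehension, cons (:)
-- plays the role of `cons`, and mzero is just []
extract :: [Node] -> [Node]
extract root = do
    Element _ children <- take 1 root
    child <- children
    if isElement child then child : mzero else mzero
```

For example, `extract [Element "root" [Text "hi", Element "movie" []]]` yields `[Element "movie" []]` – the Text child is dropped via mzero, exactly as extractElements drops non-element nodes.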

processCommand :: UNode IO Text -> UNode IO Text
processCommand elt@(Element "title" _ _) = Element "title" [] $ joinL $ do
    txt <- textContentM elt
    return $ search txt
processCommand (Element cmd _ _) = Element "unknown" [("command", cmd)] mzero

Here is our command processor.  We have one command <title>foo</title> that finds all movies whose titles contain foo.

joinL is a bit of List magic that lets you drop down into the underlying monad, which in this case is XMLT IO a.  joinL’s type is :: ItemM l (l a) -> l a where ItemM l is a type function giving the list’s monad.  So, the stuff after joinL resolves to a type of :: XMLT IO (ListOf (UNode IO Text)).

search :: Text -> ListOf (UNode IO Text)
search key = joinL $ do
    iter <- liftIO $ parse defaultParserOptions $ \root -> do
        let l = do
                elt@(Element _ _ children) <- genericTake 1 root
                movie <- List.filter isElement children
                return movie
        execute l
        return l
    eMovies <- liftIO $ fileDriver iter "movies.xml"
    case eMovies of
        Left err -> fail $ "failed to read 'movies.xml': "++show err
        Right movies -> return $ List.filter matches movies
  where
    matches elt = key `T.isInfixOf` fromMaybe "" (getAttribute elt "title")

Here’s where our handler does some real I/O.  We read our database from a flat file using the same method of parsing.  Passing possibly unexecuted nodes outside the XMLT monad is a bit wrong, and needs to be addressed in the design, but here it works as long as I execute them.  Alternatively a pure XML parse would work.  hexpat has functions to convert between pure and monadic node types.

So, I build and run the server, and here is the result, using Unix’s nc command as my client.  I typed this:

<a>
<title>of the</title>
The output is:
<?xml version="1.0" encoding="UTF-8"?>
<server>
 <title>
   <movie id="dvzrwfvryd" disc="41" title="War of the Worlds (2005)"
        director="Steven Spielberg" genre="Sci Fi Thriller" rating="6"
        description="Tom Cruise alert" imdbID="tt0407304"/>
   <movie id="xxvjgxpokp" disc="44" title="Shaun of the Dead"
        director="Edgar Wright" genre="Comedy Horror" rating="8"
        description="British send-up zombie movie" imdbID="tt0365748"/>
   <movie id="duvcjsygqi" disc="104" title="March of the Penguins (La Marche de l&apos;empereur)"
        director="Luc Jacquet" genre="Documentary" description="" imdbID="tt0428803"/>
   <movie id="dawcezoiro" disc="109" title="Pirates of the Caribbean: Dead Man&apos;s Chest"
        director="Gore Verbinski" genre="Action/Comedy" rating="7" description="" imdbID="tt0383574"/>
 </title>
(New lines added for readability)

And the session can process more commands interactively.

And pickled

I should also mention my related hexpat-pickle package which is a shameless rip-off of the picklers from Uwe Schmidt’s excellent hxt package.  I find it a very practical and quick way to bang out XML picklers.  (It doesn’t work with hexpat-iteratee yet.)

Bye bye

Here’s the code in downloadable form.  Make sure you use the monads-fd and transformers packages instead of mtl.  Also hexpat-iteratee and text.

I hope you found this interesting.  I hope the XML haters of #haskell will be miraculously transformed into XML tolerators, and I hope you’ll help me improve hexpat. – Stephen Blackheath, Manawatu, New Zealand

AusHac2010: The inaugural Australian Haskell Hackathon!
Mar 23rd, 2010 by Axman6

Over the last week or so, Ivan Miljenovic and I have been busy organising AusHac2010. We’ve made a lot of progress, and are announcing the dates as the 16th-18th of July. If you’d like to come along and work on projects like:

  • The LLVM backend to GHC
  • Accelerate, a Haskell EDSL for regular array computations using various backends (CUDA, OpenCL, LLVM, etc.)
  • Hubris, the Haskell-Ruby binding
  • Leksah, the Haskell IDE written in Haskell
  • MPI bindings

then please put your name down on the sign up page.

This should be a great opportunity for Aussie (and non-Aussie!) Haskell hackers to come and meet all those people you know from Planet Haskell and #haskell, and give something back to the community, while having a great time.

Hope to see you there, – Alex Mason

A small follow up
Jan 8th, 2010 by Axman6

In my previous post about why I love the cereal package, I went through the development of a bencoding parser and encoder. Brian was kind enough to point out some of the flaws in this code (which, I should add, were caused by me not actually checking the spec while writing the code – obviously a bad idea), and from these comments I think I’ve managed to fix most of the problems:

Hi, thanks for writing this stuff. I think it could be pretty cool, but it could benefit from more precise reading and implementation of the spec.

For example, bencoded integers can be negative.

Also, my alarms go off whenever I see ‘read’. In ‘getBString’, you pass ‘read count’ to ‘getByteString’, which expects Int. But check, e.g., ‘read (show $ 2^64-1) :: Int’ in ghci. So if the torrent data is malformed, you could end up passing a negative length to ‘getByteString’. Maybe it knows how to deal with that, but it’s not something you should rely on.

You also have to decide what to do about dictionaries you read whose keys aren’t in order, etc.

Basically, please be more precise, especially if you put this on Hackage. This stuff is supposed to be industrial strength. Thanks.

The first problem, not handling negative integers, was pretty trivial to fix: all I needed to do was check whether there was a ‘-’ char at the front and, either way, get all the digits and then read them:

-- | Parses a BInt
getBInt :: Get BCode
getBInt = BInt . read <$> getWrapped 'i' 'e' intP
    where intP = (('-':) <$> (char '-' *> getDigits)) <|> getDigits
          -- char returns (), so the '-' is re-attached by hand

Brian also pointed out something I wasn’t particularly happy with either: the use of read to read in an Int64. Under normal circumstances this would be more than large enough to read any bytestring found in bencoded data (.torrent files are usually between 1 and 200KB), so we should never have run into a problem here, but it’s still good to make sure we can be ‘industrial strength’:

-- | Parses a BString
getBString :: Get BCode
getBString = do
    count <- getDigits
    BString <$> ( char ':' *> getStr (read count :: Integer))
    where maxInt = fromIntegral (maxBound :: Int) :: Integer
          getStr n | n >= 0 = B.concat <$> (sequence $ getStr' n)
                   | otherwise = fail $ "read a negative length string, length: " ++ show n
          getStr' n | n > maxInt = getByteString maxBound : getStr' (n-maxInt)
                    | otherwise = [getByteString . fromIntegral $ n]

Here you can see we’re now using an Integer as the read value, and taking chunks of maxBound :: Int bytes, until there are less than that many bytes left to fetch.
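The chunking arithmetic on its own can be sketched like this (chunks is a hypothetical name invented for this sketch, not part of the real code): split a requested Integer length into pieces no bigger than a given maximum, largest first.

```haskell
-- Split a total Integer length into pieces of at most maxI each,
-- the way getStr' does with maxBound :: Int (hypothetical helper)
chunks :: Integer -> Integer -> [Integer]
chunks maxI n
    | n > maxI  = maxI : chunks maxI (n - maxI)
    | otherwise = [n]
```

For example, chunks 5 12 gives [5,5,2]: two full-sized requests, then one for the remainder.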

I’ve decided to ignore the problem of dictionaries with out-of-order keys; I can see this being something others may have overlooked in their implementations, and it’s entirely possible that other encoders do not put the keys in the right order. Our implementation does, but it can also easily handle such malformed input. I see this as a bonus, and I hope others do too (I feel the code is more robust, and that’s always good).

I hope this has made some difference to the code, and what people think of it.

Until next time,

– Axman

Why I love Cereal
Jan 5th, 2010 by Axman6

Cereal, as you may know from my previous posts, is a library for parsing binary data from strict ByteStrings. It is very similar to the binary package but, importantly, provides both an Alternative instance and an Either String a return type for the decode function, which tells you where the parse failed.

I’ve been playing around with cereal lately in jlouis’ haskell-torrent project, rewriting the various binary parsing and producing parts of the program (the torrent file parser and the wire protocol parser). I thought it would be nice to share some of the code used for these, to demonstrate how easy cereal makes it to do such things.

To begin with, I’ll show you the part that decodes and encodes torrent files (if needed in the future). Torrent files are encoded using a very simple encoding, known as bencoding, which consists of four major primitives: Integral numbers, Strings of bytes, Arrays of bencoded objects, and Dictionaries of String, bencoded object pairs. This is very nicely represented using this datatype:

-- | BCode represents the structure of a bencoded file
data BCode = BInt Integer                       -- ^ An integer
           | BString B.ByteString               -- ^ A string of bytes
           | BArray [BCode]                     -- ^ An array
           | BDict (M.Map B.ByteString BCode)   -- ^ A key, value map
  deriving Show
The specification for bencoded data goes something like this:

Integers are encoded as the ASCII character for ‘i’ as a byte, followed by the digits of the integral value, terminated by the ASCII byte for ‘e’.
Eg: the number ‘42’ would be encoded as “i42e”
Strings are encoded as the digits of their length, followed by a colon (‘:’), then the bytes of the string. These strings are really just byte sequences, and probably shouldn’t be treated as having an encoding (as jlouis and I found out when I tried to test the current code on GHC 6.12.1 with the BString type using Strings instead of ByteStrings, and found that the simple test contained byte sequences that could not be represented as Strings).
Eg: the string “hello” would become “5:hello”, “hello world” would become “11:hello world”
Arrays are encoded as an ASCII ‘l’ (for list, I believe), followed by any number of bencoded objects, terminated by an ASCII ‘e’. (This is where using binary became difficult: you had to explicitly check, using lookAhead, whether you had reached the terminating ‘e’ before attempting to parse another bencoded object, due to the lack of actual failure handling.)
Eg: ["Hello", 123] would become “l5:helloi123ee”. Notice how we’ve used the previous definitions for integral numbers, and strings.
Dictionaries are encoded as an ASCII ‘d’, followed by the String, object pairs, followed by an ASCII ‘e’.
Eg: fromList [("test",123),("arr",[1,2,"hello"])] would become “d4:testi123e3:arrli1ei2e5:helloee”.
It looks a bit of a mess, but it is quite efficient.
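As a sanity check on those rules, here is a throwaway String-based encoder – BVal and bencode are names invented for this sketch, not the real ByteString-backed BCode used below:

```haskell
import qualified Data.Map as M

-- String-based stand-in for BCode, just to exercise the rules above
data BVal = BI Integer            -- integer:    i<digits>e
          | BS String             -- string:     <length>:<bytes>
          | BA [BVal]             -- array:      l<items>e
          | BD (M.Map String BVal) -- dictionary: d<pairs>e

bencode :: BVal -> String
bencode (BI i)  = "i" ++ show i ++ "e"
bencode (BS s)  = show (length s) ++ ":" ++ s
bencode (BA xs) = "l" ++ concatMap bencode xs ++ "e"
bencode (BD m)  = "d" ++ concatMap pair (M.toList m) ++ "e"
  where pair (k, v) = bencode (BS k) ++ bencode v
```

So bencode (BA [BS "hello", BI 123]) gives "l5:helloi123ee", matching the array example above. Note that M.toList emits keys in ascending order, so dictionaries always come out with sorted keys.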

Encoding

When writing my Serialize instance (cereal’s version of the Binary class) for the BCode type, I decided it would be much easier to write the put methods first. This turned out to be rather straightforward, once I’d written a few helper functions.

toW8 :: Char -> Word8
toW8 = fromIntegral . ord

fromW8 :: Word8 -> Char
fromW8 = chr . fromIntegral

toBS :: String -> B.ByteString
toBS = B.pack . map toW8

fromBS :: B.ByteString -> String
fromBS = map fromW8 . B.unpack

-- | Put an element, wrapped by two characters
wrap :: Char -> Char -> Put -> Put
wrap a b m = do
    putWord8 (toW8 a)
    m
    putWord8 (toW8 b)

-- | Put something as it is shown using @show@
putShow :: Show a => a -> Put
putShow x = mapM_ put (show x)

With these in hand, I set to work implementing the put function. The Integer and Array cases were straightforward:
instance Serialize BCode where
    put (BInt i)     = wrap 'i' 'e' $ putShow i
    put (BArray arr) = wrap 'l' 'e' . mapM_ put $ arr
The Dictionary and String implementations weren’t too bad either:
    put (BDict mp)   = wrap 'd' 'e' dict
        where dict = mapM_ encPair . M.toList $ mp
              encPair (k, v) = put (BString k) >> put v
    put (BString s)  = do
        putShow (B.length s)
        putWord8 (toW8 ':')
        putByteString s
As you can see, the code is quite clear, and matches the specification quite well.

Decoding

Parsing the data was the next step. This proved a little more difficult, but with my recent (shallow) experience with Parsec, I knew what was needed.

I decided to start by writing some useful combinators (this is a lie, I wrote them when needed, but lying makes the post flow better >_>). These included the following:

-- | Get a Char. Only works with single byte characters
getCharG :: Get Char
getCharG = fromW8 <$> getWord8

-- | Parse a given character
char :: Char -> Get ()
char c = do
    x <- getCharG
    if x == c
        then return ()
        else fail $ "Expected char: '" ++ c : "' got: '" ++ [fromW8 x, '\'']

-- | Get something wrapped in two Chars
getWrapped :: Char -> Char -> Get a -> Get a
getWrapped a b p = char a *> p <* char b
-- The same as: char a >> p >>= \x -> char b >> return x

-- | Parse zero or more items using a given parser
many :: Get a -> Get [a]
many p = many1 p `mplus` return []

-- | Parse one or more items using a given parser
many1 :: Get a -> Get [a]
many1 p = (:) <$> p <*> many p

-- | Returns a character if it is a digit, fails otherwise. Uses isDigit.
digit :: Get Char
digit = do
    x <- getCharG
    if isDigit x
        then return x
        else fail $ "Expected digit, got: " ++ show x

-- | Get one or more digit characters
getDigits :: Get String
getDigits = many1 digit

My favourite two definitions here are many and many1, which nicely show the use of Alternative: they are mutually recursive, with many1 being the only one of the two to actually do any parsing, while many checks whether many1 failed to parse even one object using the parser p. It’s really quite beautiful, and makes the code that follows a hell of a lot nicer to write. This is where the love mentioned in the title comes in, by the way.
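To see the mutual recursion outside of cereal, here is the same pair over a toy Maybe-based parser built only on base – P, item, and the primed names are inventions for this sketch (many0'/many1' are primed so they don’t clash with Control.Applicative’s many):

```haskell
import Control.Applicative

-- A minimal parser: consume a prefix of the input, or fail with Nothing
newtype P a = P { runP :: String -> Maybe (a, String) }

instance Functor P where
    fmap f (P p) = P $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative P where
    pure a = P $ \s -> Just (a, s)
    P pf <*> P pa = P $ \s -> do
        (f, s')  <- pf s
        (a, s'') <- pa s'
        Just (f a, s'')

instance Alternative P where
    empty = P $ const Nothing
    P a <|> P b = P $ \s -> a s <|> b s

-- One character satisfying a predicate
item :: (Char -> Bool) -> P Char
item ok = P $ \s -> case s of
    (c:cs) | ok c -> Just (c, cs)
    _             -> Nothing

-- The mutually recursive pair: many1' does the parsing,
-- many0' supplies the empty fallback
many0', many1' :: P a -> P [a]
many0' p = many1' p <|> pure []
many1' p = (:) <$> p <*> many0' p
```

Running runP (many0' (item (`elem` "0123456789"))) "123e" gives Just ("123","e"), while many1' of the same parser fails outright on "e".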

With these in hand, I could now go ahead and write the actual parsers for various BCode types. Parsing BInts and BArrays is dead simple now:

-- | Parses a BInt
getBInt :: Get BCode
getBInt = BInt . read <$> getWrapped 'i' 'e' getDigits

-- | Parses a BArray
getBArray :: Get BCode
getBArray = BArray <$> getWrapped 'l' 'e' (many get)

As a side note, I’ve now come to see just what the folks on #haskell were on about when they said Applicative is nice. I think I’ve fallen in love (yet again!).

BStrings were a little more difficult, but not hard, given what I’ve just written:

-- | Parses a BString
getBString :: Get BCode
getBString = do
    count <- getDigits
    BString <$> (char ':' *> getByteString (read count))

Here we get as many digits as we can, followed by a colon, and then take the number of bytes the digits specified. Finally, we have the BDict definition, which is also quite nice, if slightly annoying in its use of pattern matching (don’t get me wrong, I love pattern matching, but it’s the only place it’s used in the parser :( )
-- | Parses a BDict
getBDict :: Get BCode
getBDict = BDict . M.fromList <$> getWrapped 'd' 'e' (many getPairs)
    where getPairs = do
            (BString s) <- getBString
            x <- get
            return (s, x)
Putting it all together, we finally have a definition for the get function in the Serialize class.
    get = getBInt <|> getBArray <|> getBDict <|> getBString

A rather clean, elegant, and hopefully correct serialiser and deserialiser for the bencoded format used in torrent files. I’m considering releasing this code as a separate package on Hackage, but I’m still not sure how widely it might be used. I have a strong feeling that it would not be very widely at all, but that a library of more advanced combinators for cereal would make life a lot easier for others like me who have strange binary formats that need to be parsed in an efficient manner.

Please, I implore you, do let me know what you think of all this; I’m always interested in seeing what others think of my code, and ways to improve it.

Until next time,

— Axman

© Alex Mason (Axman6) 2009