A small follow up
January 8th, 2010 by Axman6

In my previous post about why I love the cereal package, I went through the development of a bencoding parser and encoder. Brian was kind enough to point out some of the flaws in that code (which, I should add, came from me not actually checking the spec while writing it; obviously a bad idea), and from his comments I think I’ve managed to fix most of the problems:

Hi, thanks for writing this stuff. I think it could be pretty cool, but it could benefit from more precise reading and implementation of the spec.

For example, bencoded integers can be negative.

Also, my alarms go off whenever I see ‘read’. In ‘getBString’, you pass ‘read count’ to ‘getByteString’, which expects Int. But check, e.g., ‘read (show $ 2^64-1) :: Int’ in ghci. So if the torrent data is malformed, you could end up passing a negative length to ‘getByteString’. Maybe it knows how to deal with that, but it’s not something you should rely on.

You also have to decide what to do about dictionaries you read whose keys aren’t in order, etc.

Basically, please be more precise, especially if you put this on Hackage. This stuff is supposed to be industrial strength. Thanks.

The first problem, not handling negative integers, was pretty trivial to fix: all I needed to do was check whether there was a ‘-’ char at the front, then get all the digits and read them:

-- | Parses a BInt
getBInt :: Get BCode
getBInt = BInt . read <$> getWrapped 'i' 'e' intP
    where intP = ((:) <$> char '-' <*> getDigits) <|> getDigits
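The actual parser above depends on helpers from the previous post (getWrapped, getDigits, char), so here is a self-contained sketch of just the sign handling on plain Strings: an optional leading ‘-’, then digits, between ‘i’ and ‘e’. This is purely illustrative; parseBInt is a made-up name, not part of the real code:

```haskell
import Data.Char (isDigit)

-- Simplified standalone version of the negative-integer handling:
-- accepts "i<digits>e" or "i-<digits>e", rejects everything else.
parseBInt :: String -> Maybe Integer
parseBInt ('i':rest) = go rest
  where
    go ('-':ds) = fmap negate (digits ds)
    go ds       = digits ds
    -- succeed only if there is at least one digit and then a closing 'e'
    digits ds = case span isDigit ds of
        (num@(_:_), "e") -> Just (read num)
        _                -> Nothing
parseBInt _ = Nothing
```

Note that a bare ‘-’ with no digits (“i-e”) still fails, which matches what the real parser does since getDigits demands at least one digit.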

Brian also pointed out something I wasn’t particularly happy with either: the use of read to read in an Int64. Under normal circumstances that would be more than large enough to read any bytestring found in bencoded data (.torrent files are usually well under 200KB), so we should never have run into a problem here, but it’s still good to make sure we can be ‘industrial strength’:

-- | Parses a BString
getBString :: Get BCode
getBString = do
    count <- getDigits
    BString <$> ( char ':' *> getStr (read count :: Integer))
    where maxInt = fromIntegral (maxBound :: Int) :: Integer
          getStr n | n >= 0 = B.concat <$> (sequence $ getStr' n)
                   | otherwise = fail $ "read a negative length string, length: " ++ show n
          getStr' n | n > maxInt = getByteString maxBound : getStr' (n-maxInt)
                    | otherwise = [getByteString . fromIntegral $ n]

Here you can see we’re now reading the length as an Integer, and taking chunks of maxBound :: Int bytes until there are fewer than that many bytes left to fetch.
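The chunking scheme itself is easy to check in isolation. Below is a sketch of just the size-splitting logic, with a cap parameter standing in for maxBound :: Int so the behaviour is visible with small numbers (chunkSizes is an illustrative name, not part of the real code, where each piece would be fed to getByteString):

```haskell
-- Split a (possibly huge) byte count into pieces that each fit
-- under the given cap; the pieces always sum back to the original.
chunkSizes :: Integer -> Integer -> [Integer]
chunkSizes cap n
    | n > cap   = cap : chunkSizes cap (n - cap)
    | otherwise = [n]
```

For example, with a cap of 10 a count of 25 splits into two full chunks and a remainder, exactly as the real code takes maxBound-sized reads until the remainder fits in an Int.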

I’ve decided not to reject dictionaries with out-of-order keys. I can see this being something others may have overlooked in their implementations, so it’s entirely possible that other encoders don’t put the keys in the right order. Our implementation does sort its keys, but it can also happily decode such malformed input. I see this as a bonus, and I hope others do too (I feel the code is more robust, and that’s always good).
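As an aside on why producing sorted keys is cheap: if the dictionary side is backed by a Data.Map (an assumption here about the representation), the sorted output comes for free, since M.toList always yields keys in ascending order regardless of insertion order:

```haskell
import qualified Data.Map as M

-- With a Map-backed dictionary, the encoder can just walk
-- M.toList and the bencoding spec's sorted-key requirement
-- is satisfied automatically.
sortedKeys :: M.Map String v -> [String]
sortedKeys = map fst . M.toList
```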

I hope this has made some difference to the code, and what people think of it.

Until next time,

– Axman


© Alex Mason (Axman6) 2009