IIUC, the following makes 2 additional copies of the vector, one at the freeze and one at thaw. It works, but I suspect it's inefficient. Can someone suggest improvements or simplifications?
NB: newtype Vec = Vec (Ptr Vec) deriving Storable, together with the associated entry in an inline-c Context, represents the type of a pointer to an opaque C data structure, and vecGetArray/vecRestoreArray let us operate on a contiguous-memory representation of the struct's contents.
Thanks!

```haskell
import qualified Data.Vector.Storable as V
import qualified Data.Vector.Storable.Mutable as VM

withVecGetVectorM :: Vec -> (V.Vector PetscScalar_ -> IO (V.Vector PetscScalar_)) -> IO (V.Vector PetscScalar_)
withVecGetVectorM v f = do
  p <- vecGetArrayPtr v
  pf <- newForeignPtr_ p
  vImm <- V.freeze (VM.unsafeFromForeignPtr0 pf len)
  vImmOut <- f vImm
  vMutOut <- V.thaw vImmOut
  let (fpOut, _, _) = VM.unsafeToForeignPtr vMutOut
      pOut = unsafeForeignPtrToPtr fpOut
  vecRestoreArrayPtr v pOut
  return vImmOut
  where len = vecSize v
```
Edit: I just noticed that, while the returned Vector is correct, the modified Vec (the "side effect") is not actually modified when I use it after returning from this function. GHC doesn't recompute it (laziness?). How do I make it recompute it? What is the idiomatic way in Haskell to work with mutable data across the FFI?
I'm asking because the C API lets one modify the array via the pointer before vecRestore. This bit of code would appear within e.g. optimization loops.

submitted by ocramz
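One possible direction (a sketch, not from the thread): skip the freeze/thaw round-trip entirely and let the callback write through the C pointer, so the mutation is already in place when vecRestore runs. The snippet below simulates the C-owned buffer with a plain marshalled array (it uses only the vector package plus base FFI machinery; vecGetArrayPtr/vecRestoreArrayPtr are stand-ins for the real calls):

```haskell
import qualified Data.Vector.Storable.Mutable as VM
import Foreign.ForeignPtr (newForeignPtr_)
import Foreign.Marshal.Array (peekArray, withArray)

-- Wrapping the raw pointer as an IOVector means every write lands
-- directly in the C-owned memory: no freeze, no thaw, no copies.
main :: IO ()
main =
  withArray [1, 2, 3 :: Double] $ \p -> do  -- stand-in for vecGetArrayPtr
    fp <- newForeignPtr_ p                  -- no finalizer: C owns the memory
    let mv = VM.unsafeFromForeignPtr0 fp 3
    x0 <- VM.read mv 0
    VM.write mv 0 (x0 * 10)                 -- in-place write through the pointer
    xs <- peekArray 3 p                     -- the underlying buffer has changed
    print xs
```

This prints [10.0,2.0,3.0]: the mutation is visible through the original pointer, which is exactly what vecRestoreArrayPtr would then commit back to the Vec.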
Back in April I found myself in need of typed holes in Template Haskell. To my disappointment it turned out that typed holes are not implemented in TH. Sadly, this happens too often: a feature is added to GHC but no Template Haskell support is implemented for it. At the time I was working on injective type families and already had some experience extending the TH implementation. I figured that adding support for typed holes should be a trivial task, no more than 30 minutes of coding. I created a feature request on Trac and started coding. I quickly realized that it wouldn't be that simple. Not that the amount of required work was that extensive; I simply tripped over the way GHC handles names internally. As a result the work stalled for several months, and I only finished it two weeks ago thanks to help from Richard Eisenberg.
My patch allows you to do several interesting things. Firstly, it allows you to quote typed holes, i.e. expressions whose name starts with an underscore:

```haskell
[d| i :: a -> a
    i x = _ |]
```
This declaration quote will represent _ using an UnboundVarE constructor. Secondly, you can now splice unbound variables:

```haskell
i :: a -> a
i x = $( return $ VarE (mkName "_") )

j :: a -> a
j x = $( return $ UnboundVarE (mkName "_") )
```
Notice that in a splice you can use either VarE or UnboundVarE to represent an unbound variable – they are treated the same.
A very important side effect of my implementation is that you can actually quote unbound variables. This means that you can now use nested pattern splices, as demonstrated by one of the tests in the GHC testsuite:

```haskell
baz = [| \ $( return $ VarP $ mkName "x" ) -> x |]
```
Previously this code was rejected. The reason is that:
- a nested pattern splice is not compiled immediately, because it is possible that it refers to local variables defined outside of the bracket;
- the bracket is renamed immediately at the declaration site, and all the variables were required to be in scope at that time.
The combination of the above means that the pattern splice does not bring anything into scope (because it is not compiled until the outer bracket is spliced in), which led to x being out of scope. But now it is perfectly fine to have unbound variables in a bracket, so the above definition of baz is accepted. When the bracket is first renamed, x is treated as an unbound variable, which is now fine; when the bracket is spliced in, the inner splice is compiled and correctly brings a binding for x into scope. Getting nested pattern splices to work was not my intention when I started implementing this patch, but it turned out we essentially got this feature for free.
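The new behaviour can be checked with a small self-contained program (my example, not the testsuite's): the inner pattern splice builds the lambda's binder, and the x in the body is unbound at renaming time but resolved once the splice is compiled.

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- The bracket contains a nested pattern splice; x is unbound when the
-- bracket is renamed and only bound once the inner splice is compiled.
f :: Int -> Int
f = $( [| \ $(return (VarP (mkName "x"))) -> x |] )

main :: IO ()
main = print (f 42)
```

On GHC 8.0 and later this compiles and prints 42; earlier GHCs reject the bracket because x is out of scope.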
One stumbling block during my work was typed Template Haskell. With normal, untyped TH I can place a splice at the top level of a file:

```haskell
$(return [ SigD (mkName "m")
                (ForallT [PlainTV (mkName "a")] []
                         (AppT (AppT ArrowT (VarT (mkName "a")))
                               (VarT (mkName "a"))))
         , FunD (mkName "m")
                [Clause [VarP (mkName "x")] (NormalB (VarE (mkName "x"))) []]
         ])
```
and this will build a definition that will be spliced into the source code. But converting this into a typed splice, by saying $$(return ...., resulted in a compiler panic. I reported this as #10945. The reason turned out to be quite tricky. When Template Haskell is enabled, top-level expressions are allowed, and each such expression is treated as an implicit splice. The problem is that a typed TH splice doesn't really make sense at the top level, so the whole expression should be treated as an implicit splice. Yet it was treated as an explicit splice, which resulted in a panic later in the compiler pipeline.
Another issue that came up with typed TH was that typed holes cannot be quoted, again leading to a panic. I reported this as #10946. This issue has not yet been solved.
The above work is now merged into HEAD and will be available in GHC 8.0.
Following the successful first meeting of the South of England Regional Programming Languages Seminar (S-REPLS) in Cambridge earlier this year, we are delighted to announce the second meeting in the series to be held on Friday 20th November at Middlesex University, London, organised by Andrei Popescu, Jaap Boender and Raja Nagarajan.
S-REPLS is a regular and informal meeting for those based in the South of England with a professional interest (whether academic or commercial) in the semantics and implementation of programming languages. Following the format of the first meeting, a blend of contributed and invited talks will be offered. The invited speaker for the upcoming meeting is Philip Wadler of the University of Edinburgh.
We are now actively soliciting contributed talks on any subject related to programming languages, their semantics and implementation. To propose a talk, please e-mail Andrei Popescu at firstname.lastname@example.org with your name, affiliation, and proposed talk title with short abstract.
Attendance at the meeting is free, and lunch is supplied free of charge, though we do ask that notice is given to Andrei if you plan to attend to ensure adequate catering supplies are ordered. Please also notify Andrei of any special dietary requirements.
Many readers of /r/haskell from industry and academia attended the first meeting (and some gave very interesting talks). We hope that they and many more will attend the second meeting, following the success of the first!
To keep up to date on any developments, please also subscribe to the S-REPLS mailing list at www.jiscmail.ac.uk/srepls.

submitted by dmulligan
Summary: One of HLint's rules reduced sharing in the presence of view patterns. Lambda desugaring and optimisation could be improved in GHC.
HLint has the rule:

```haskell
function x = \y -> body
  ==>
function x y = body
```
Given a function whose body is a lambda, you can use the function syntactic sugar to move the lambda arguments to the left of the = sign. One side condition is that you can't have a where binding, for example:

```haskell
function x = \y -> xx + y
    where xx = trace "hit" x
```
This is equivalent to:

```haskell
function x = let xx = trace "hit" x in \y -> xx + y
```
Moving a let under a lambda can cause arbitrary additional computation, as I previously described, so is not allowed (hint: think of map (function 1) [2..5]).

View Patterns
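The hint can be made concrete with a trace (my illustration, assuming an unoptimised GHC so the sharing is not rearranged by the optimiser):

```haskell
import Debug.Trace (trace)

-- Lambda form: xx is computed once per partial application
f1 :: Int -> Int -> Int
f1 x = let xx = trace "hit" x in \y -> xx + y

-- After the (here unsafe) rewrite: xx is recomputed for every y
f2 :: Int -> Int -> Int
f2 x y = let xx = trace "hit" x in xx + y

main :: IO ()
main = do
  print (map (f1 1) [2..5])  -- "hit" appears once on stderr
  print (map (f2 1) [2..5])  -- "hit" appears four times
```

Both calls print [3,4,5,6], but the trace output shows the rewrite quadrupled the work.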
One side condition I hadn't anticipated is that if x is a view pattern, the transformation can still reduce sharing. Consider:

```haskell
function (trace "hit" -> xx) = \y -> xx + y
```
This is equivalent to:

```haskell
function x = case trace "hit" x of xx -> \y -> xx + y
```
And moving y to the right of the = causes trace "hit" to be recomputed for every value of y.
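The recomputation is observable directly (my illustration; ViewPatterns required, unoptimised GHC assumed):

```haskell
{-# LANGUAGE ViewPatterns #-}
import Debug.Trace (trace)

-- Lambda form: the view's result is shared across all uses of y
shared :: Int -> Int -> Int
shared (trace "shared-hit" -> xx) = \y -> xx + y

-- y to the left of '=': the view is re-run on every call
unshared :: Int -> Int -> Int
unshared (trace "unshared-hit" -> xx) y = xx + y

main :: IO ()
main = do
  let s = shared 1
      u = unshared 1
  print (s 2 + s 3)  -- "shared-hit" traced once on stderr
  print (u 2 + u 3)  -- "unshared-hit" traced twice
```

Both expressions evaluate to 7; only the trace counts differ.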
I've now fixed HLint 1.9.22 to spot this case. Using Uniplate, I added the side condition:

```haskell
null (universeBi pats :: [Exp_])
```
Specifically, there must be no expressions inside the pattern, which covers the PViewPat constructor, and any others that might harbour expressions (and thus computation) in future.
The problem with function definitions also applies equally to \p1 -> \p2 -> e, which cannot be safely rewritten as \p1 p2 -> e if p1 contains a view pattern.
Pattern synonyms make this problem worse, as they can embody arbitrary computation in a pattern, which is lexically indistinguishable from a normal constructor. As an example:

```haskell
pattern Odd <- (odd -> True)
f Odd = 1
f _ = 2
```
However, putting complex computation behind a pattern is probably not a good idea, since it makes it harder for the programmer to understand the performance characteristics. You could also argue that using view patterns together with definitions that capture computation after partial application and then return lambdas is confusing in itself, so I've refactored Shake to avoid that.

Potential Fixes
I think it should be possible to fix the problem by optimising the desugaring of functions, ensuring patterns are matched left-to-right where allowable, and that each match happens before the lambda requesting the next argument. The question is whether such a change would improve performance generally. Let's take an example:

```haskell
test [1,2,3,4,5,6,7,8,9,10] x = x
test _ x = negate x
```
Could be changed to:

```haskell
test [1,2,3,4,5,6,7,8,9,10] = \x -> x
test _ = trace "" $ \x -> negate x
```
Which goes 3-4x faster when running map (test [1..]) [1..n] at -O2 (thanks to Jorge Acereda Maciá for the benchmarks). The trace is required to avoid GHC deoptimising the second variant to the first, as per GHC bug #11029.
There are two main downsides I can think of. Firstly, the desugaring becomes more complex. Secondly, these extra lambdas introduce overhead, as the STG machine GHC uses makes multiple-argument lambdas cheaper. That overhead could be removed using call-site analysis and other optimisations, so those optimisations might need improving before this change produces a general benefit.
The code:

```haskell
prop_ShowReadAssoc :: Expr -> Bool
prop_ShowReadAssoc a = readExpr (show a) == Just (assoc a)

assoc :: Expr -> Expr
assoc (Add (Add a b) c) = assoc (Add a (Add b c))
assoc (Add a b)         = Add (assoc a) (assoc b)
assoc (Mul (Mul a b) c) = assoc (Mul a (Mul b c))
assoc (Mul a b)         = Mul (assoc a) (assoc b)
assoc (Sin a)           = undefined
assoc (Cos a)           = undefined
assoc Var               = Var
assoc a                 = a

arbExpr :: Int -> Gen Expr
arbExpr s = frequency
  [ (1, do n <- arbitrary
           return (Num n))
  , (s, do a <- arbExpr s'
           b <- arbExpr s'
           return (Add a b))
  , (s, do a <- arbExpr s'
           b <- arbExpr s'
           return (Mul a b))
  , (s, do n <- arbExpr s'
           return (Sin n))
  , (s, do n <- arbExpr s'
           return (Cos n))
  , (s, do n <- arbExpr s'
           return Var)
  ]
  where s' = s `div` 2

instance Arbitrary Expr where
  arbitrary = sized arbExpr
```
but it fails for Sin and Cos. Also, I don't want it to generate negative numbers, but I'm not sure how to do that.
Any tips or help?

submitted by Kablaow
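One likely fix (my sketch, not from the thread, using a cut-down Expr so it runs standalone): Sin and Cos should recurse like the other cases rather than being undefined, and QuickCheck's NonNegative modifier (or abs) can keep generated literals non-negative.

```haskell
-- Cut-down stand-in for the asker's Expr type
data Expr = Num Int | Add Expr Expr | Sin Expr | Cos Expr
  deriving Show

assoc :: Expr -> Expr
assoc (Add (Add a b) c) = assoc (Add a (Add b c))
assoc (Add a b)         = Add (assoc a) (assoc b)
assoc (Sin a)           = Sin (assoc a)  -- was: undefined
assoc (Cos a)           = Cos (assoc a)  -- was: undefined
assoc e                 = e

-- For non-negative literals, something like
--   (1, do NonNegative n <- arbitrary; return (Num n))
-- in the frequency list would work.

main :: IO ()
main = print (assoc (Cos (Add (Add (Num 1) (Num 2)) (Num 3))))
```

Re-association now happens underneath Cos instead of crashing.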
I've recently discovered that bytestring builders can be incredibly inefficient for creating small bytestrings. While looking for ways around it I thought about using something like the following type during the construction of the bytestring:

```haskell
type AlternativeBuilder = MonoidTree ByteString

data MonoidTree a = Pure a | Empty | Append (MonoidTree a) (MonoidTree a)

instance Monoid (MonoidTree a) where
  mempty = Empty
  mappend = Append
```
As you can see, this type abstracts over the Monoid operations, allowing us either to postpone the actual concatenation until the moment when all chunks are aggregated and we have access to info on how many bytes to allocate for the output bytestring, or to stream the chunks without any concatenation or allocation. Appending on this type is evidently an O(1) boxing operation. It is general enough to be used over Text or Vector.
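For what it's worth, a minimal sketch of the collapse step (my code, not from the post; toChunks and run are names I made up): flatten the tree to a chunk list with an accumulator, then let B.concat compute the total length and do one sized allocation.

```haskell
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as BC

data MonoidTree a = Pure a | Empty | Append (MonoidTree a) (MonoidTree a)

instance Semigroup (MonoidTree a) where
  (<>) = Append  -- O(1): just a box

instance Monoid (MonoidTree a) where
  mempty = Empty

-- Flatten right-to-left with an accumulator: no quadratic list appends
toChunks :: MonoidTree a -> [a]
toChunks t = go t []
  where
    go Empty        k = k
    go (Pure a)     k = a : k
    go (Append l r) k = go l (go r k)

-- B.concat sums the chunk lengths first, then allocates once
run :: MonoidTree B.ByteString -> B.ByteString
run = B.concat . toChunks

main :: IO ()
main = BC.putStrLn (run (Pure (BC.pack "foo") <> mempty <> Pure (BC.pack "bar")))
```

The deferred work is all in run; until then every mappend is constant time, which is the property the post is after.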
That all leaves me wondering why the authors of "bytestring" implemented Builder the way they did, and what the downsides of what I'm considering here might be.
Also, since MonoidTree is quite a general abstraction, is something like that already provided by some existing package?

submitted by nikita-volkov
Conventional wisdom tells us that LLVM's support for garbage collection isn't expressive enough to satisfy the needs of a purely functional language like Haskell. Still, GHC has an LLVM backend that seems to be pretty good, and everyone is pretty happy with it. Are there any documents explaining how this backend works? Which of LLVM's features, if any, does the code generator leverage for GC functionality?

submitted by theseoafs
vim-hsimport is a vim plugin for extending the import list of a Haskell source file.
This version is mostly about making it easier to install and configure, and with the next version of hdevtools to be released it should better handle cabal settings and even support stack.

submitted by dan00
At the moment the repository/project is called haskell-ide.
This leads to the impression that it is an IDE.
It is actually a backend/server to provide services for an IDE.
We are considering changing the name.
The options are
Please cast your vote at http://strawpoll.me/5842105
Otherwise, provide alternatives in the comments.

submitted by alan_zimm