# News aggregator

### C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals Chapter 1 of 13 - YouTube

### Haskell Job @ Luminal

Hi all - I'm working with Luminal, Inc. on a product called Fugue. It has a growing Haskell component and Luminal is looking for people who'd like to work on it.

Fugue is very cool. Its goal is to realize the ideal of 'immutable infrastructure'; if you want to upgrade a server, you don't touch the existing server itself - you put a new one in its place with the required upgrades sitting nicely in place. Immutability is a property that Haskellers know and love.

Broadly, Fugue provides a way to declare and deploy infrastructure that it then manages. It does this in a way that i) does not impact performance negatively, ii) generates desirable security properties, and iii) provides opportunities for cost optimization and savings.

You can check out more about Fugue at https://fugue.it.

Fugue is a pretty interesting project to work on. It encompasses distributed systems, security, prediction, optimization, and operation in adversarial environments. Of particular interest to Haskell folk is a component of Fugue called Ludwig - a declarative, statically-typed DSL that aims to duplicate the Haskell ideal of 'if it compiles, it probably works' for infrastructure. If a Ludwig infrastructure declaration compiles, your infrastructure should 'just work' as expected.

The primary languages used at Luminal are Python and Go, but the Ludwig compiler is written in Haskell, and internal tools written in Haskell are also starting to appear.

If you might be interested in working on Ludwig and Fugue, feel free to check out the official job ad. If you would consider yourself an intermediate+ level Haskeller with some visible open-source contributions/blog content out there or professional Haskell experience under your belt, I'd especially love to hear from you.

Cheers!

submitted by jaredtobin [link] [18 comments]

### Neil Mitchell: Refactoring with Equational Reasoning

*Summary: Haskell is great for refactoring, thanks to being able to reason about and transform programs with confidence.*

I think one of Haskell's strengths as a practical language is that it's easy to refactor, and more importantly, easy to refactor safely. Programs in the real world often accumulate technical debt - code that is shaped more by its history than its current purpose. Refactoring is one way to address that technical debt, making the code simpler without changing any meaningful behaviour.

When refactoring, you need to think of which alternative forms of code are equivalent but better. In C++ I've removed unused variables, only to find they were RAII variables, and their mere presence had a semantic effect. In R I've removed redundant if expressions, only to find the apparently pure condition had the effect of coercing a variable and changing its type. In Haskell, it's equally possible to make refactorings that at first glance appear safe but aren't - however, in practice, it happens a lot less. I think there are a few reasons for that:

- Haskell is pure and evaluation strategy is largely unobservable - moving a statement "before" or "after" another lexically is usually safe.
- Refactorings that do go wrong, for example variables that accidentally get moved out of scope or types which are no longer as constrained, usually result in compiler errors.
- The Haskell community cares about semantics and laws. The Monad laws are satisfied by almost all monads; flagrantly breaking those laws is rare.
- Functions like unsafePerformIO, which could harm refactoring, are almost always used behind a suitable pure abstraction.

Note that these reasons are due to both the language, and the conventions of the Haskell community. (Despite these factors, there are a few features that can trip up refactorings, e.g. exceptions, record wildcards, space-leaks.)

To take a very concrete example, today I was faced with the code:

```haskell
f = fromMaybe (not b) . select
```

used as:

```haskell
if f v == b then opt1 else opt2
```

At one point the function f was used lots, had a sensible name and nicely abstracted some properties. Now f is used once, the semantics are captured elsewhere, and the code is just unclear. We can refactor this statement, focusing on the condition:

```haskell
f v == b
-- inline f
(fromMaybe (not b) . select) v == b
-- remove brackets and inline (.)
fromMaybe (not b) (select v) == b
-- expand to a case statement
(case select v of Nothing -> not b; Just x -> x) == b
-- push the == down
case select v of Nothing -> not b == b; Just x -> x == b
-- simplify not b == b
case select v of Nothing -> False; Just x -> x == b
-- collapse back up
select v == Just b
```

And now substitute back in:

```haskell
if select v == Just b then opt1 else opt2
```

Our code is now much simpler and more direct. Thanks to the guarantees I expect of Haskell programs, I also have a high degree of confidence this code really is equivalent - even if it isn't obvious just looking at the beginning and end.
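In fact the final step holds for any select and any b, so the whole chain can even be brute-force checked over all Bool inputs. A small self-contained sketch (with a hypothetical select, since the real one isn't shown in the post):

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical stand-in for the 'select' from the post.
select :: Bool -> Maybe Bool
select v = if v then Just True else Nothing

-- The original condition, before refactoring.
original :: Bool -> Bool -> Bool
original b v = (fromMaybe (not b) . select) v == b

-- The refactored condition.
refactored :: Bool -> Bool -> Bool
refactored b v = select v == Just b

main :: IO ()
main = print (and [original b v == refactored b v | b <- [False,True], v <- [False,True]])
-- prints True
```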

### Hoersten — Modern Emacs Haskell-Mode

### ERDI Gergo: Typed embedding of STLC into Haskell

Someone posted to the Haskell subreddit this blogpost of Lennart where he goes step-by-step through implementing an evaluator and type checker for CoC. I don't know why this post from 2007 showed up on Reddit this week, but it's a very good post, pedagogically speaking. Go and read it.

In this post, I'd like to elaborate on the simply-typed lambda calculus part of his blogpost. His typechecker defines the following types for representing STLC types, terms, and environments:

```haskell
data Type = Base
          | Arrow Type Type
    deriving (Eq, Show)

type Sym = String

data Expr = Var Sym
          | App Expr Expr
          | Lam Sym Type Expr
    deriving (Eq, Show)
```

The signature of the typechecker presented in his post is as follows:

```haskell
type ErrorMsg = String
type TC a = Either ErrorMsg a

newtype Env = Env [(Sym, Type)] deriving (Show)

tCheck :: Env -> Expr -> TC Type
```

My approach is to instead create a representation of terms of STLC in such a way that only well-scoped, well-typed terms can be represented. So let's turn on a couple of heavy-weight language extensions from GHC 7.8 (we'll see how each of them is used), and define a typed representation of STLC terms:

```haskell
{-# LANGUAGE GADTs, StandaloneDeriving #-}
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-} -- sigh...

import Data.Singletons.Prelude
import Data.Singletons.TH
import Data.Type.Equality

-- | A (typed) variable is an index into a context of types
data TVar (ts :: [Type]) (a :: Type) where
    Here  :: TVar (t ': ts) t
    There :: TVar ts a -> TVar (t ': ts) a
deriving instance Show (TVar ctx a)

-- | Typed representation of STLC: well-scoped and well-typed by construction
data TTerm (ctx :: [Type]) (a :: Type) where
    TConst :: TTerm ctx Base
    TVar   :: TVar ctx a -> TTerm ctx a
    TLam   :: TTerm (a ': ctx) b -> TTerm ctx (Arrow a b)
    TApp   :: TTerm ctx (Arrow a b) -> TTerm ctx a -> TTerm ctx b
deriving instance Show (TTerm ctx a)
```

The idea is to represent the context of a term as a list of the types of the variables in scope, and index into that list, de Bruijn-style, to refer to variables. This indexing operation maintains the necessary connection between the pointer and the type that it points to. Note the type of the TLam constructor, where we extend the context at the front for the inductive step.

To give a taste of how convenient it is to work with this representation programmatically, here's a total evaluator:

```haskell
-- | Interpretation (semantics) of our types
type family Interp (t :: Type) where
    Interp Base = ()
    Interp (Arrow t1 t2) = Interp t1 -> Interp t2

-- | An environment gives the value of all variables in scope in a given context
data Env (ts :: [Type]) where
    Nil  :: Env '[]
    Cons :: Interp t -> Env ts -> Env (t ': ts)

lookupVar :: TVar ts a -> Env ts -> Interp a
lookupVar Here      (Cons x _)  = x
lookupVar (There v) (Cons _ xs) = lookupVar v xs

-- | Evaluate a term of STLC. This function is total!
eval :: Env ctx -> TTerm ctx a -> Interp a
eval env TConst     = ()
eval env (TVar v)   = lookupVar v env
eval env (TLam lam) = \x -> eval (Cons x env) lam
eval env (TApp f e) = eval env f $ eval env e
```

Of course, the problem is that this representation is not at all convenient for other purposes. For starters, it is certainly not how we would expect human beings to type in their programs.
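Stripped of the singletons machinery needed later, the typed core above fits in a small self-contained module. Here is a condensed sketch of the same representation and evaluator (type and constructor names shortened so it compiles on its own):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies, TypeOperators #-}

-- Object-language types: a base type and functions.
data Ty = B | Ty :-> Ty

-- A variable is a typed de Bruijn index into the context.
data Var (ctx :: [Ty]) (a :: Ty) where
    VZ :: Var (a ': ctx) a
    VS :: Var ctx a -> Var (b ': ctx) a

-- Well-scoped, well-typed terms by construction.
data Tm (ctx :: [Ty]) (a :: Ty) where
    Con :: Tm ctx 'B
    V   :: Var ctx a -> Tm ctx a
    Lam :: Tm (a ': ctx) b -> Tm ctx (a ':-> b)
    App :: Tm ctx (a ':-> b) -> Tm ctx a -> Tm ctx b

-- Semantics of object-language types as Haskell types.
type family Interp (t :: Ty) where
    Interp 'B = ()
    Interp (a ':-> b) = Interp a -> Interp b

-- An environment holds a value for every variable in the context.
data Env (ctx :: [Ty]) where
    Nil  :: Env '[]
    Cons :: Interp a -> Env ctx -> Env (a ': ctx)

lookupVar :: Var ctx a -> Env ctx -> Interp a
lookupVar VZ     (Cons x _)  = x
lookupVar (VS v) (Cons _ xs) = lookupVar v xs

-- Total evaluator: no error cases are even representable.
eval :: Env ctx -> Tm ctx a -> Interp a
eval _   Con       = ()
eval env (V v)     = lookupVar v env
eval env (Lam b)   = \x -> eval (Cons x env) b
eval env (App f e) = eval env f (eval env e)

main :: IO ()
main = print (eval Nil (App (Lam (V VZ)) Con)) -- prints ()
```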

My version of the typechecker is such that instead of giving the type of a term (when it is well-typed), it instead transforms the loose representation (Term) into the tight one (TTerm). A Term is well-scoped and well-typed (under some binders) iff there is a TTerm corresponding to it. Let's use singletons to store type information in existential positions:

```haskell
$(genSingletons [''Type])
$(singDecideInstance ''Type)

-- | Existential version of 'TTerm'
data SomeTerm (ctx :: [Type]) where
    TheTerm :: Sing a -> TTerm ctx a -> SomeTerm ctx

-- | Existential version of 'TVar'
data SomeVar (ctx :: [Type]) where
    TheVar :: Sing a -> TVar ctx a -> SomeVar ctx

-- | A typed binder of variable names
data Binder (ctx :: [Type]) where
    BNil  :: Binder '[]
    BCons :: Sym -> Sing t -> Binder ts -> Binder (t ': ts)
```

Armed with these definitions, we can finally define the type inferrer. I would argue that it is no more complicated than Lennart's version. In fact, it has the exact same shape, with value-level equality tests replaced with Data.Type.Equality-based checks.

```haskell
-- | Type inference for STLC
infer :: Binder ctx -> Term -> Maybe (SomeTerm ctx)
infer bs (Var v) = do
    TheVar t v' <- inferVar bs v
    return $ TheTerm t $ TVar v'
infer bs (App f e) = do
    TheTerm (SArrow t0 t) f' <- infer bs f
    TheTerm t0' e' <- infer bs e
    Refl <- testEquality t0 t0'
    return $ TheTerm t $ TApp f' e'
infer bs (Lam v ty e) = case toSing ty of
    SomeSing t0 -> do
        TheTerm t e' <- infer (BCons v t0 bs) e
        return $ TheTerm (SArrow t0 t) $ TLam e'

inferVar :: Binder ctx -> Sym -> Maybe (SomeVar ctx)
inferVar (BCons u t bs) v
    | v == u = return $ TheVar t Here
    | otherwise = do
        TheVar t' v' <- inferVar bs v
        return $ TheVar t' $ There v'
inferVar _ _ = Nothing
```

Note that pattern matching on Refl in the App case brings into scope the type equalities that are crucial to making infer well-typed.
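The pattern match on Refl is where the real work happens. As a minimal, self-contained illustration of that same Data.Type.Equality mechanism, here is a tiny hand-written singleton over promoted Bool (a stand-in for the generated singletons over Type above):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Type.Equality

-- A hand-rolled singleton over promoted Bool.
data SBool (b :: Bool) where
    SFalse :: SBool 'False
    STrue  :: SBool 'True

instance TestEquality SBool where
    testEquality SFalse SFalse = Just Refl
    testEquality STrue  STrue  = Just Refl
    testEquality _      _      = Nothing

-- Matching on Refl brings the type equality a ~ b into scope,
-- just like in the App case of 'infer'.
sameTy :: SBool a -> SBool b -> Bool
sameTy a b = case testEquality a b of
    Just Refl -> True
    Nothing   -> False

main :: IO ()
main = print (sameTy STrue STrue, sameTy STrue SFalse) -- prints (True,False)
```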

Of course, because of the existential nature of SomeVar, we should provide a typechecker as well which is a much more convenient interface to work with:

```haskell
-- | Typechecker for STLC
check :: forall ctx a. (SingI a) => Binder ctx -> Term -> Maybe (TTerm ctx a)
check bs e = do
    TheTerm t' e' <- infer bs e
    Refl <- testEquality t t'
    return e'
  where
    t = singByProxy (Proxy :: Proxy a)

-- | Typechecker for closed terms of STLC
check_ :: (SingI a) => Term -> Maybe (TTerm '[] a)
check_ = check BNil
```

(The SingI a constraint is an unfortunate implementation detail; the kind of a is Type, which is closed, so GHC should be able to know there is always going to be a SingI a instance.)

To review, we've written a **typed** embedding of
STLC into Haskell, with a total evaluator and a typechecker, in
about 110 lines of code.

If we were doing this in something more like Agda, one possible improvement would be to define a function untype :: TTerm ctx a -> Term and use that to give check basically a type of Binder ctx -> (e :: Term) -> Either ((e' :: TTerm ctx a) -> untype e' == e -> Void) (TTerm ctx a), i.e. to give a proof in the non-well-typed case as well.

### Problem with FFI and JHC

Hi everyone,

I want to use the FFI, calling C functions from Haskell and compiling the Haskell with jhc.

So I tried this tutorial as a little introduction to the FFI:

```haskell
{-# INCLUDE <math.h> #-}
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types -- C types

foreign import ccall "math.h sin"
    c_sin :: CDouble -> CDouble

test :: Double -> Double
test n = realToFrac (c_sin (realToFrac n))

main = print (test 0.5)
```

and I compile my *test.hs* with:

But I get a realToFrac error:

```
What: failure

Why: context reduction, no instance for: Jhc.Class.Real.Fractional Jhc.Type.C.CDouble

Where: on line 9 in test.hs
  in the explicitly typed Main.test Main.1_n
  = Jhc.Num.realToFrac (Main.c_sin (Jhc.Num.realToFrac Main.1_n)) {- on line 9 -}
```
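(For what it's worth, the tutorial snippet itself appears fine: the identical code compiles and runs unchanged under GHC, so the failure seems specific to jhc's CDouble lacking a Fractional instance for realToFrac.)

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- Same binding as in the tutorial; GHC accepts realToFrac on CDouble.
foreign import ccall "math.h sin"
    c_sin :: CDouble -> CDouble

test :: Double -> Double
test n = realToFrac (c_sin (realToFrac n))

main :: IO ()
main = print (test 0.5)
```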

So I tried writing a little C file containing:

```c
#include "test.h"

int test(int n) {
    return n + 1;
}
```

and its associated *test.h*,

and I replaced in my *test.hs*:

with:

```haskell
foreign import ccall "test.h test"
    c_test :: CInt -> CInt

test :: Integer -> Integer
test n = fromIntegral (c_test (fromIntegral n))
```

and still compiling with

and I got:

```
/tmp/jhc_NawnRL/main_code.c:9:18: fatal error: test.h: No such file or directory
 # include <test.h>
                  ^
compilation terminated.
Exiting abnormally. Work directory is '/tmp/jhc_NawnRL'
jhc: user error (C code did not compile.)
```

So I'm a little lost. I also tried this:

```haskell
foreign import ccall "$fullpath/test.h " c_test :: CInt -> CInt
```

and jhc tells me my C code can't compile.

Do I need to add something to the compilation command?

Thanks to anybody who takes the time to read this; any help is appreciated.

submitted by superancetre [link] [8 comments]

### JP Moresmau: Genetic evolution of a neural network

As an implementation of a genetic algorithm, I use simple-genetic-algorithm, which is, ahem, simple to use compared to some other packages. I've modified it, though, to use MonadRandom instead of having to thread the RandomGen through, so if you want to run the code make sure to take the version from my fork, in the MonadRandom branch, until these changes are released by the maintainer on Hackage.

To use a genetic algorithm, we need two things: a representation of the structure we want to evolve that is suitable for the genetic transformations, and the implementation of these transformations (crossover and mutation) themselves.

For the first task, we convert the network to simple vectors:

```haskell
-- | Convert a network to simple vector structures
toVectors :: (Floating (Vector a), Container Vector a, Floating a) =>
             Network a -> [([Vector a], Vector a)]
toVectors = map (tovs . layerToShowable) . layers
  where
    tovs (ShowableLayer ws bs) = (toRows ws, bs)

-- | Convert back to a network, given a network definition and vector structures
fromVectors :: (Floating (Vector a), Container Vector a, Floating a) =>
               LayerDefinition a -> [([Vector a], Vector a)] -> Network a
fromVectors ld = Network . map ((\sl -> showableToLayer (sl, ld)) . frvs)
  where
    frvs (vs, v) = ShowableLayer (fromRows vs) v
```

Note that we need a LayerDefinition to rebuild the network from the Vectors. Currently each layer of the network has the same definition and the algorithm does NOT evolve this structure, only the weights and biases.

We're going to keep that information along with some Worlds that we use for fitness testing:

```haskell
-- | Store the network information as vectors
data NetworkData = NetworkData [([Vector Float], Vector Float)] [World]
    deriving (Show, Read, Eq, Ord)
```

Then we can implement the Chromosome class. Crossover takes the average of all weights and biases of the two parents, mutation changes one value randomly. Of course other implementations could be found.

```haskell
-- | Chromosome instance
instance Chromosome NetworkData where
    -- Take average
    crossover (NetworkData xs ws) (NetworkData ys _) =
        return [NetworkData (Prelude.zipWith zipW xs ys) ws]
      where
        zipW (vs1, v1) (vs2, v2) = (Prelude.zipWith zipW1 vs1 vs2, zipW1 v1 v2)
        zipW1 = V.zipWith (\x y -> (x + y) / 2)

    -- Mutate one weight randomly
    mutation (NetworkData xs ws) = do
        xs' <- randomChange r1 xs
        return $ NetworkData xs' ws
      where
        randomChange _ [] = return []
        randomChange f xs2 = do
            idx <- getRandomR (0, length xs2 - 1)
            mapM (\(i, x) -> if i == idx then f x else return x) $ zip [0..] xs2
        r1 (vs, v) = do
            (v3:vs2) <- randomChange r2 (v:vs)
            return (vs2, v3)
        r2 v2 = do
            idx2 <- getRandomR (0, V.length v2 - 1)
            dx   <- getRandomR (-20, 20)
            return $ v2 V.// [(idx2, dx)]

    -- Calculate fitness
    fitness (NetworkData xs ws) = sum (map (fitWorld xs) ws) / fromIntegral (length ws)
```

For the fitness function we calculate the fitness for each given world and average it. I'm not trying to be too clever with that code.

For each world we run the food-searching algorithm from each corner and evaluate how far we are from the target, and if we reached it, how long it took us. So networks that find the food will always rank higher than those that don't, and the quicker among them will rank higher again.

```haskell
-- | Calculate fitness on a given world
fitWorld :: [([Vector Float], Vector Float)] -> World -> Double
fitWorld dat w = sum (map fitCorner $ corners $ wSize w) / 4
  where
    fitCorner pos =
        let network    = fromVectors layerDef dat
            poss       = algSteps w (neuralAlg w network) 50 pos
            endSmell   = currentSmell w $ last poss
            possLength = length poss
        in fitFormula w endSmell possLength (distance pos $ wFood w)

-- | Fitness formula
fitFormula :: World -> Int -> Int -> Int -> Double
fitFormula w endSmell possLength dist =
    case fromIntegral endSmell / fromIntegral (wSmell w) of
        1 -> 2 + (fromIntegral dist / fromIntegral possLength)
        n -> n
```
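To make the ranking property concrete, here is a standalone rendering of the formula with the world's smell threshold passed in explicitly (fitFormula' is a hypothetical simplification for illustration, not part of the code above):

```haskell
-- Standalone version of the fitness formula: a smell ratio of 1 means
-- the food was reached, so the score is at least 2 plus a bonus;
-- otherwise the (sub-1) ratio itself is the fitness.
fitFormula' :: Int -> Int -> Int -> Int -> Double
fitFormula' wSmell endSmell possLength dist =
    case fromIntegral endSmell / fromIntegral wSmell of
        1 -> 2 + (fromIntegral dist / fromIntegral possLength)
        n -> n

main :: IO ()
main = do
    print (fitFormula' 10 10 10 10) -- reached the food: 2 + 10/10 = 3.0
    print (fitFormula' 10 5 20 10)  -- didn't reach it: 5/10 = 0.5
```

A network that reaches the food thus always scores above 2, while one that doesn't is capped below 1 - exactly the ranking described above.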

Then we just need a stop condition: either a maximum number of generations, or reaching the maximum possible fitness (shortest path found from all corners).

```haskell
-- | Maximum for the fitness
maxFit :: Double
maxFit = fitFormula (World undefined undefined 10 undefined) 10 10 10

-- | Stop function
stopf :: NetworkData -> Int -> IO Bool
stopf nd gen = return $ gen > 300 || fitness nd == maxFit
```

And some code to generate random networks:

```haskell
-- | Build random network data
buildNetworkData :: (Monad m, RandomGen g) => [World] -> RandT g m NetworkData
buildNetworkData ws = do
    g <- getSplit
    let n = buildNetwork g
    return $ NetworkData (toVectors n) ws
```

We can then evolve our population of networks, say on two worlds, w and w2:

```haskell
runGAIO 64 0.1 (buildNetworkData [w,w2]) stopf
```

And after a couple of minutes we get a network that can find the food from points other than the tested corners:

```
Iteration:1
##########
#........#
#........#
#........#
#...X....#
#........#
#........#
#........#
#.......@#
#........#
#........#
##########

Iteration:2
##########
#........#
#........#
#........#
#...X....#
#........#
#........#
#......@.#
#........#
#........#
#........#
##########

Iteration:3
##########
#........#
#........#
#........#
#...X....#
#........#
#.....@..#
#........#
#........#
#........#
#........#
##########

Iteration:4
##########
#........#
#........#
#........#
#...X....#
#....@...#
#........#
#........#
#........#
#........#
#........#
##########

Iteration:5
##########
#........#
#........#
#........#
#...@....#
#........#
#........#
#........#
#........#
#........#
#........#
##########
```

Yummy!

Interested in all this? I recommend the ai-junkie tutorial, then! My code is of course on github. Feedback and suggestions welcome!

### Stuck at the first part of chapter 1 of the CIS course

### Help making this function point-free?

Hi all,

I'm working through the NICTA course, and I've run into something I don't understand. I have an implementation of find for Lists:

```haskell
find :: (a -> Bool) -> List a -> Optional a
find p x = headOr Empty $ map Full $ filter p x
```

But when I try to make it point-free (just removing the args p and x), I get errors:

```
[ 5 of 28] Compiling Course.List ( src/Course/List.hs, interpreted )

src/Course/List.hs:268:3:
    Couldn't match expected type `(a -> Bool) -> List a -> Optional a'
                with actual type `Optional a0'
    Relevant bindings include
      find :: (a -> Bool) -> List a -> Optional a
        (bound at src/Course/List.hs:267:1)
    In the expression: headOr Empty $ map Full $ filter
    In an equation for `find': find = headOr Empty $ map Full $ filter

src/Course/List.hs:268:29:
    Couldn't match expected type `List a0'
                with actual type `(a1 -> Bool) -> List a1 -> List a1'
    Probable cause: `filter' is applied to too few arguments
    In the second argument of `($)', namely `filter'
    In the second argument of `($)', namely `map Full $ filter'
Failed, modules loaded: Course.Core, Course.Id, Course.Optional, Course.Validation.
```

Note: Optional is basically Maybe, with Empty and Full instead of Nothing and Just. headOr returns the head of a List or the value given if the list is empty.

Can anyone shed some light on this for me?
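For reference, the same shape can be tried over standard lists and Maybe (standing in for the course's List and Optional): keeping p applied to filter type-checks, while going fully point-free needs nested composition rather than $:

```haskell
import Data.Maybe (listToMaybe)

-- 'p' is still named: only 'x' has been eta-reduced away.
find1 :: (a -> Bool) -> [a] -> Maybe a
find1 p = listToMaybe . filter p

-- Fully point-free: '(f .) . g' composes f through both of g's arguments.
find2 :: (a -> Bool) -> [a] -> Maybe a
find2 = (listToMaybe .) . filter

main :: IO ()
main = print (find1 even [1,3,4,5 :: Int], find2 even [1,3,4,5 :: Int])
-- prints (Just 4,Just 4)
```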

submitted by katamarimasu [link] [19 comments]