# News aggregator

### Generating compiler back ends at the snap of a finger

The paper:

Resourceable, Retargetable, Modular Instruction Selection Using a Machine-Independent, Type-Based Tiling of Low-Level Intermediate Code

Ramsey and Dias have a series of papers about making it ever easier to generate compiler backends, and the claim is that they produce decent code to boot. I wonder if this stuff has shown up, or will show up, in compilers I can use? (Or do you think it doesn't actually matter, for some pragmatic reason or other?)

Abstract: We present a novel variation on the standard technique of selecting instructions by tiling an intermediate-code tree. Typical compilers use a different set of tiles for every target machine. By analyzing a formal model of machine-level computation, we have developed a set of tiles that is machine-independent while retaining the expressive power of machine code. Using this tileset, we reduce the number of tilers required from one per machine to one per architectural family (e.g., register architecture or stack architecture). Because the tiler is the part of the instruction selector that is most difficult to reason about, our technique makes it possible to retarget an instruction selector with significantly less effort than standard techniques. Retargeting effort is further reduced by applying an earlier result which generates the machine-dependent implementation of our tileset automatically from a declarative description of instructions' semantics. Our design has the additional benefit of enabling modular reasoning about three aspects of code generation that are not typically separated: the semantics of the compiler's intermediate representation, the semantics of the target instruction set, and the techniques needed to generate good target code.

### A Good Symbol for $

I use Vim Haskell Conceal+ and I was thinking about customizing it to conceal $, seeing as how it is a commonly used operator, and I sorta wanted to convey the fact that it is asymmetrical (like >>= does a good job of). Do you have any ideas for a good symbol? I suppose I should do & from Data.Function while I'm at it.

submitted by AlephOmega1

### [TFP'15] call for participation


### Haskell Weekly News

### Hypothetical lens xml parsing that documents errors?

I've been using the hexpat-lens library to parse XML, and it's really wonderful. I can write functions like this:

```haskell
readPlugins :: ByteString -> [Plugin]
readPlugins xml =
  map fromJust $ -- Revisit this decision
    foreach (xml ^.. xmlText ./ named "Report" ./ named "ReportHost" ./ named "ReportItem") $ \node ->
      Plugin
        <$> node ^? ix "pluginID" . textIntPrism
        <*> node ^? ix "severity" . textIntPrism . intSeverityPrism
        <*> node ^? ix "pluginFamily"
        <*> node ^? id ./ named "fname" ./ text
        <*> node ^? id ./ named "description" ./ text
```

This gets all the ReportItem nodes (well, the ones at the right place in the tree) and extracts relevant information from child nodes and attributes. This is a cool and useful way to query XML. Additionally, it is easy to write. Now, what it doesn't do well is error handling. You might say "duh, you're using fromJust", but even if you switched out map fromJust with sequence (and made the function's return type Maybe [Plugin]), you'd still get absolutely no information about why a document fails to parse.

So, there are sort of two questions I'm getting at, the first being: Is there a way to use the Prism optic to capture failure information? If I compose a bunch of Prisms and then run the resulting Prism, is there a way to get an indication of which step failed?
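One avenue (a sketch, not tested against hexpat-lens): lens provides `matching`, which runs a Prism and returns `Left` on failure, so wrapping each step with a name preserves which one failed. The `textInt` prism and the `step` helper below are hypothetical illustrations, not part of any library:

```haskell
import Control.Lens (Prism', matching, prism')
import Text.Read (readMaybe)

-- A toy prism between Strings and Ints, standing in for textIntPrism.
textInt :: Prism' String Int
textInt = prism' show readMaybe

-- Run a named prism, turning silent failure into a labelled error.
step :: String -> Prism' s a -> s -> Either String a
step name p s =
  either (const (Left ("failed at: " ++ name))) Right (matching p s)

main :: IO ()
main = do
  print (step "textInt" textInt "42")    -- Right 42
  print (step "textInt" textInt "oops")  -- Left "failed at: textInt"
```

Chaining several such steps in `Either` (or adding the index of the failing node to the label) gets close to the error messages described below, at the cost of leaving pure optic composition.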

The second seems like a more difficult, but possibly orthogonal, question. Is there a way to track state inside of any of the optics? I ask because the kind of error information that I'm really looking for would be more like this:

```
Report[0]:ReportHost[0]:ReportItem[54]:fname not found
```

And so the optics would have to be passing around information about the context. Any thoughts would be appreciated.

submitted by andrewthad

### Why is Functor used on Lens?

This is how you define a simple _1 Lens and use it:

```haskell
data Const a b = Const { getConst :: a } deriving Show

instance Functor (Const a) where
  fmap f (Const a) = Const a

_1 f (l,r) = fmap (\x -> (x,r)) (f l)

view lens thing = getConst $ lens Const thing

main = do
  print $ view (_1._1) ((1,2),2)
```

But Functor is used just as a trick to define a way to combine the setter and the getter. Isn't that overkill / much less readable than just passing comb as a parameter?

```haskell
_1 f (left,right) comb = comb (f left comb) (\x -> (x,right))

view lens thing = lens const thing const

main = do
  print $ view (_1._1) ((1,2),2)
```

submitted by SrPeixinho
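For context, the usual answer is that abstracting over the Functor lets the very same _1 act as a getter and a setter, depending on which functor you instantiate. A minimal self-contained sketch (defining Const and Identity locally to avoid imports):

```haskell
-- The Functor parameter is what lets one _1 serve two roles.
newtype Const a b = Const { getConst :: a }
instance Functor (Const a) where
  fmap _ (Const a) = Const a        -- ignores f: perfect for *reading*

newtype Identity a = Identity { runIdentity :: a }
instance Functor Identity where
  fmap f (Identity a) = Identity (f a)  -- applies f: perfect for *writing*

_1 :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
_1 f (l, r) = fmap (\x -> (x, r)) (f l)

view lens s   = getConst (lens Const s)
over lens g s = runIdentity (lens (Identity . g) s)

main :: IO ()
main = do
  print (view _1 (1, 2))        -- 1
  print (over _1 (+ 1) (1, 2))  -- (2,2)
```

With an explicit comb parameter the getter and setter would need separate plumbing; here one definition of _1 composes with (.) and works for both.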

### Looking for Haskell developer / data structures researcher (part-time, remote)

I'm searching for an associate to research data structures and develop a code generation system in Haskell.

Occupancy: ~10 hrs / week. Rate: 100-150 USD / week (subject to discussion).

Required:

- being very interested in the project. I think this is possible, because I used to develop this project as a hobby. You might (and should) make this project the subject of your next course project / thesis at your school.

Great if you know:

- Haskell well (monad transformers, GADTs, type families)
- Java, at least a little
- how a CPU works (pipelining, branch prediction, caches)

I guess this job is primarily interesting for students in their 1st-3rd year.

Reach me at: leventov.ru@gmail.com (Google hangout chat, preferred) leventov@ya.ru (e-mail)

submitted by leventov

### Roles, GND, Data.Coerce

### Introduction to Agda (two talk videos)

### Danny Gratzer: Bracket Abstraction: The Smallest PL You've Ever Seen

It’s well known that lambda calculus is an extremely small, Turing Complete language. In fact, most programming languages over the last 5 years have grown some (typed and/or broken) embedding of lambda calculus with aptly named lambdas.

This is wonderful and everything, but lambda calculus is actually a little complicated. It’s centred around binding and substituting for variables; while this is elegant, it’s a little difficult to formalize mathematically. It’s natural to wonder whether we can avoid dealing with variables by building up all our lambda terms from a special privileged few.

These systems (sometimes called combinator calculi) are quite pleasant to model formally, but how do we know that our system is complete? In this post I’d like to go over translating any lambda calculus program into a particular combinator calculus, SK calculus.

**What is SK Combinator Calculus?**

SK combinator calculus is a language with exactly 3 types of expressions.

- We can apply one term to another, e e,
- We have one term, s
- We have another term, k

Besides the obvious ones, there are two main rules for this system:

- s a b c = (a c) (b c)
- k a b = a

And that’s it. What makes SK calculus so remarkable is how minimal it is. We now show that it’s Turing complete by translating lambda calculus into it.
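The two rules can be animated directly. Here is a tiny sketch of a step function over an ad-hoc SK datatype (my own illustration, separate from the post's definitions below):

```haskell
-- A direct encoding of the two SK reduction rules.
data SK = S | K | Ap SK SK deriving (Eq, Show)

-- One step of leftmost reduction, if any rule applies.
step :: SK -> Maybe SK
step (Ap (Ap K a) _)        = Just a                       -- k a b = a
step (Ap (Ap (Ap S a) b) c) = Just (Ap (Ap a c) (Ap b c))  -- s a b c = (a c) (b c)
step (Ap l r) = case step l of
  Just l' -> Just (Ap l' r)
  Nothing -> Ap l <$> step r
step _ = Nothing

-- Reduce to normal form; diverges on terms with no normal form.
normalize :: SK -> SK
normalize t = maybe t normalize (step t)

main :: IO ()
main = print (normalize (Ap (Ap K S) K))  -- S
```

For instance, `normalize (Ap (Ap (Ap S K) K) x)` yields `x` for any normal `x`, which is the "s k k behaves as identity" fact used below.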

**Bracket Abstraction**

First things first, let’s just define how to represent both SK calculus and lambda calculus in our Haskell program.

```haskell
data Lam = Var Int | Ap Lam Lam | Lam Lam

data SK = S | K | SKAp SK SK
```

Now we begin by defining a translation from a simplified lambda calculus to SK calculus. This simplified calculus is just SK supplemented with variables. By defining this step, the actual transformation becomes remarkably crisp.

```haskell
data SKH = Var' Int | S' | K' | SKAp' SKH SKH
```

Note that SKH has variables, but no way to bind them. In order to remove a variable, we have bracket. bracket has the property that replacing Var 0 in a term, e, with a term, e', is the same as SKAp (bracket e) e'.

```haskell
-- Remove one variable
bracket :: SKH -> SKH
bracket (Var' 0)    = SKAp' (SKAp' S' K') K'
bracket (Var' i)    = SKAp' K' (Var' (i - 1))
bracket (SKAp' l r) = SKAp' (SKAp' S' (bracket l)) (bracket r)
bracket x           = SKAp' K' x
```

If we’re at Var' 0 we replace the variable with the term s k k. This has the property that (s k k) A = A. It’s traditional to abbreviate s k k as i (leading to the name SKI calculus) but i is strictly unnecessary as we can see. Any term that doesn’t use the variable is wrapped in k, which discards the supplied argument, as the substitution property requires.

If we’re at an application, we do something really clever. We have two terms which both have a free variable, so we bracket them and use S to supply the free variable to both of them! Remember that

```
s (bracket A) (bracket B) C = ((bracket A) C) ((bracket B) C)
```

which is exactly what we require by the specification of bracket.

Now that we have a way to remove free variables from an SKH term, we can close off a term with no free variables to give back a normal SK term.

```haskell
close :: SKH -> SK
close (Var' _)    = error "Not closed"
close S'          = S
close K'          = K
close (SKAp' l r) = SKAp (close l) (close r)
```

Now our translator can be written nicely.

```haskell
l2h :: Lam -> SKH
l2h (Var i)  = Var' i
l2h (Ap l r) = SKAp' (l2h l) (l2h r)
l2h (Lam h)  = bracket (l2h h)

translate :: Lam -> SK
translate = close . l2h
```

l2h is the main worker in this function. It works across SKH’s because it needs to deal with open terms during the translation. Every time we go under a binder we call bracket afterwards, removing the free variable we just introduced.

This means that if we call l2h on a closed lambda term we get back a closed SKH term. This justifies using close after the toplevel call to l2h in translate which wraps up our conversion.
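Putting the pieces together, the translation can be sanity-checked end to end. A self-contained sketch reproducing the definitions above (with the non-variable cases of bracket wrapped in K', which the substitution property requires):

```haskell
data Lam = Var Int | Ap Lam Lam | Lam Lam
data SK  = S | K | SKAp SK SK deriving (Eq, Show)
data SKH = Var' Int | S' | K' | SKAp' SKH SKH

bracket :: SKH -> SKH
bracket (Var' 0)    = SKAp' (SKAp' S' K') K'   -- s k k acts as identity
bracket (Var' i)    = SKAp' K' (Var' (i - 1))  -- other variables: shift, guard with k
bracket (SKAp' l r) = SKAp' (SKAp' S' (bracket l)) (bracket r)
bracket x           = SKAp' K' x               -- constants ignore the argument

close :: SKH -> SK
close (Var' _)    = error "Not closed"
close S'          = S
close K'          = K
close (SKAp' l r) = SKAp (close l) (close r)

l2h :: Lam -> SKH
l2h (Var i)  = Var' i
l2h (Ap l r) = SKAp' (l2h l) (l2h r)
l2h (Lam b)  = bracket (l2h b)

translate :: Lam -> SK
translate = close . l2h

main :: IO ()
main = do
  print (translate (Lam (Var 0)))          -- λx. x    gives s k k
  print (translate (Lam (Lam (Var 1))))    -- λx.λy. x gives s (k k) (s k k)
```

So the identity translates to i, and the constant function to s (k k) i, matching the classic bracket-abstraction results.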

For funsies I decided to translate the Y combinator and got back this mess

```
(s ((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k)))) ((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k)))
```

Completely useless, but kinda fun to look at. More interestingly, the canonical nonterminating lambda term is λx. x x which gives back s i i, much more readable.

**Wrap Up**

Now that we’ve performed this translation we have a very nice proof of the Turing completeness of SK calculus. This has some nice upshots: folks who study things like realizability models of constructive logics use Partial Combinatory Algebras as a model of computation. This is essentially an algebraic model of SK calculus.

If nothing else, it’s really quite crazy that such a small language is capable of simulating any computable function over numbers.

### Is it acceptable if Applicative behave not like a Monad

### FP Complete: Update on GHC 7.10 in Stackage

It seems that every time a blog post about Stackage comes out, someone comments on Reddit about how excited they are for Stackage to be ready for GHC 7.10. Unfortunately, there's still an open issue about packages that have incorrect bounds or won't compile.

Well, as about 30 different package authors probably figured out today, I
decided to take a crack at compiling Stackage with 7.10. This involved changing
around some bounds, sending some pull requests, setting some expected test
failures, and unfortunately temporarily removing a few packages from the build.
But the result is well worth it: **we now have a working build of Stackage with
GHC 7.10**!

To give some more information: this snapshot has 1028 packages, compared to 1106 for GHC 7.8. When updating the build-constraints.yaml file, I added the phrase "GHC 7.10" next to every modification I made. I encourage people to take a look at the file and see if there are any projects you'd like to send a pull request to and add GHC 7.10 support. If you do so, please ping me once the change is on Hackage so I can add it back to Stackage.

The question is: what do we do now? I'm interested in other opinions, but my recommendation is:

- Early next week, I switch over the official nightly builds to start using GHC 7.10. LTS 2 will continue running, and will use GHC 7.8 as it does already. (LTS releases never change GHC major version.)
- We work on improving the package and test coverage for GHC 7.10.
- No earlier than July 1, we release LTS 3, which will support GHC 7.10. If there is concern that the release isn't ready yet, we can hold off an extra month (though I doubt that will be necessary).

To ease the burden on package maintainers, LTS support cycles do not overlap. LTS 2 will be supported for a minimum of 3 months from its initial release (April 1, 2015), which is why LTS 3 will be released no sooner than July 1, 2015.

I'm quite surprised and excited that Stackage was able to move to GHC 7.10 so quickly. Thank you to package authors for updating your code so quickly!

### Douglas M. Auclair (geophf): April 2015 1HaskellADay Problems and Solutions

**April 2015**

- April 30th, 2015: "SHOW ME DA MONAY!" http://lpaste.net/3352992723589136384 for today's #haskell problem Simple? Sure! Solution? Yes. http://lpaste.net/7331259237240143872
- April 29th, 2015: We take stock of the Stochastic Oscillator http://lpaste.net/8447434917217828864 for today's #haskell problem #trading We are so partially stoched for a partial solution for the Stochastic Oscillator http://lpaste.net/4307607333212520448
- April 28th, 2015: Today's #haskell puzzle as a ken-ken solver http://lpaste.net/6211501623257071616 a solution (beyond my ... ken) is defined at http://lpaste.net/929006498481176576
- April 27th, 2015: Rainy days and Mondays do not stop the mail, nor today's #haskell problem! http://lpaste.net/6468251516921708544 The solution posted at http://lpaste.net/6973841984536444928 … shows us view-patterns and how to spell the word 'intercalate'.
- April 24th, 2015: Bidirectionally (map) yours! for today's #haskell problem http://lpaste.net/1645129197724631040 A solution to this problem is posted at http://lpaste.net/540860373977268224
- April 23rd, 2015: Today's #haskell problem looks impossible! http://lpaste.net/6861042906254278656 So this looks like this is a job for ... KIM POSSIBLE! YAY! @sheshanaag offers a solution at http://lpaste.net/131309 .
- April 22nd, 2015: "I need tea." #BritishProblems "I need clean data" #EveryonesPipeDream "Deletia" today's #haskell problem http://lpaste.net/2343021306984792064 Deletia solution? Solution deleted? Here ya go! http://lpaste.net/5973874852434542592
- April 21st, 2015: In which we learn about Tag-categories, and then Levenshtein distances between them http://lpaste.net/2118427670256549888 for today's #haskell problem Okay, wait: is it a group of categories or a category of groups? me confused! A solution to today's #haskell at http://lpaste.net/8855539857825464320
- April 20th, 2015: Today we can't see the forest for the trees, so let's change that http://lpaste.net/3949027037724803072 A solution to our first day in the tag-forest http://lpaste.net/4634897048192155648 ... make sure you're marking your trail with breadcrumbs!
- April 17th, 2015: No. Wait. You wanted line breaks with that, too? Well, why didn't you say so in the first place? http://lpaste.net/8638783844922687488 Have some curry with a line-breaky solution at http://lpaste.net/8752969226978852864
- April 16th, 2015: "more then." #okaythen Sry, not sry, but here's today's #haskell problem: http://lpaste.net/6680706931826360320 I can't even. lolz. rofl. lmao. whatevs. And a big-ole-blob-o-words is given as the solution http://lpaste.net/2810223588836114432 for today's #haskell problem. It ain't pretty, but... there it is
- April 15th, 2015: Poseidon's trident or Andrew's Pitchfork analysis, if you prefer, for today's #haskell problem http://lpaste.net/5072355173985157120
- April 14th, 2015: Refining the SMA-trend-ride http://lpaste.net/3856617311658049536 for today's #haskell problem. Trending and throttling doesn't ... quite get us there, but ... solution: http://lpaste.net/9223292936442085376
- April 13th, 2015: In today's #haskell problem we learn zombies are comonadic, and like eating SMA-brains. http://lpaste.net/8924989388807471104 Yeah. That. Hold the zombies, please! (Or: when $40k net profit is not enough by half!) http://lpaste.net/955577567060951040
- April 10th, 2015: Today's #haskell problem delivered with much GRAVITAS, boils down to: don't be a dumb@$$ when investing http://lpaste.net/5255378926062010368 #KeepinItReal The SMA-advisor is REALLY chatty, but how good is it? TBD, but here's a very simple advisor: http://lpaste.net/109712 Backtesting for this strategy is posted at http://lpaste.net/109687 (or: how a not so good buy/sell strategy give you not so good results!)
- April 9th, 2015: A bit of analysis of historical stock data http://lpaste.net/6960188425236381696 for today's #haskell problem A solution to the SMA-analyses part is posted at http://lpaste.net/3427480809555099648
- April 8th, 2015: MOAR! MOAR! You clamor for MOAR real-world #haskell problems, and how can I say no? http://lpaste.net/5198207211930648576 Downloading stock screens Hint: get the screens from a web service; look at, e.g.: https://code.google.com/p/yahoo-finance-managed/wiki/YahooFinanceAPIs A 'foldrM'-solution to this problem is posted at http://lpaste.net/2729747257602605056
- April 7th, 2015: Looking at a bit of real-world #haskell for today's stock (kinda-)screen-scraping problem at http://lpaste.net/5737110678548774912 Hint: perhaps you'd like to solve this problem using tagsoup? https://hackage.haskell.org/package/tagsoup *GASP* You mean ... it actually ... works? http://lpaste.net/1209131365107236864 A MonadWriter-y tagsoup-y Monoidial-MultiMap-y solution
- April 6th, 2015: What do three men teaching all of high school make, beside today's #haskell problem? http://lpaste.net/667230964799242240 Tired men, of course! Thanks, George Boole! Three Men and a High School, SOLVED! http://lpaste.net/7942804585247145984
- April 3rd, 2015: reverseR that list like a Viking! Rrrrr! for today's problem http://lpaste.net/8513906085948555264 … #haskell Totes cheated to get you the solution http://lpaste.net/1880031563417124864 used a library that I wrote, so, like, yeah, totes cheated! ;)
- April 2nd, 2015: We're breaking new ground for today's #haskell problem: let's reverse lists... relationally. And tag-type some values http://lpaste.net/389291192849793024 After several fits and starts @geophf learns how to reverse a list... relationally http://lpaste.net/7875722904095162368 and can count to the nr 5, as well
- April 1st, 2015: Take a drink of today's #haskell problem: love potion nr9 http://lpaste.net/435384893539614720 because, after all: all we need is love, la-di-dah-di-da! A solution can be found au shaque d'amour posted at http://lpaste.net/6859866252718899200

### Prof. Hudak passed away last night

This really saddens me. I don't have much to say but that he was a great man and he inspired me a lot. Let's hold him and his family in light.

submitted by enzozhc

### Brandon Simmons: Announcing hashabler: like hashable only more so

I’ve just released the first version of a haskell library for principled, cross-platform & extensible hashing of types, which includes an implementation of the FNV-1a algorithm. It is available on hackage, and can be installed with:

```
cabal install hashabler
```

hashabler is a rewrite of the hashable library by Milan Straka and Johan Tibell, having the following goals:

Extensibility; it should be easy to implement a new hashing algorithm on any Hashable type, for instance if one needed more hash bits

Honest hashing of values, and principled hashing of algebraic data types (see e.g. #30)

Cross-platform consistent hash values, with a versioning guarantee. Where possible we ensure morally identical data hashes to identical values regardless of processor word size and endianness.

Make implementing identical hash routines in other languages as painless as possible. We provide an implementation of a simple hashing algorithm (FNV-1a) and make an effort to define Hashable instances in a way that is well-documented and sensible, so that e.g. one can (hopefully) easily implement a string hashing routine in JavaScript that will match the way we hash strings here.
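For reference, the FNV-1a algorithm is tiny, which is much of its appeal for cross-language consistency. Here is a sketch of the 32-bit variant over ASCII strings (illustrative only; hashabler itself hashes typed values and its byte-level details may differ):

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word32)

-- 32-bit FNV-1a: start from the offset basis, then for each byte
-- XOR it in and multiply by the FNV prime (Word32 wraps modulo 2^32).
fnv1a :: String -> Word32
fnv1a = foldl step 2166136261
  where
    step h c = (h `xor` fromIntegral (ord c)) * 16777619

main :: IO ()
main = print (fnv1a "hello")
```

The empty string hashes to the offset basis 2166136261 (0x811c9dc5) by construction, which makes a handy first test vector when porting the routine to another language.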

I started writing a fast concurrent bloom filter variant, but found none of the existing libraries fit my needs. In particular hashable was deficient in a number of ways:

The number of hash bits my data structure requires can vary based on user parameters, and possibly be more than the 64-bits supported by hashable

Users might like to serialize their bloomfilter and store it, pass it to other machines, or work with it in a different language, so we need

- hash values that are consistent across platforms
- some guarantee of consistency across library versions

I was also very concerned about the general approach taken for algebraic types, which results in collisions, the use of “hashing” numeric values to themselves, dubious combining functions, etc. It wasn’t at all clear to me how to ensure my data structure wouldn’t be broken if I used hashable. See below for a very brief investigation into hash goodness of the two libraries.

There isn’t interest in supporting my use case or addressing these issues in hashable (see e.g. #73, #30, and #74) and apparently hashable is working in practice for people, but maybe this new package will be useful for some other folks.

**Hash goodness of hashable and hashabler, briefly**

Hashing-based data structures assume some “goodness” of the underlying hash function, and may depend on the goodness of the hash function in ways that aren’t always clear or well-understood. “Goodness” also seems to be somewhat subjective, but can be expressed statistically in terms of bit-independence tests, and avalanche properties, etc.; various things that e.g. smhasher looks at.

I thought for fun I’d visualize some distributions, as that’s easier for my puny brain to understand than statistics. We visualize 32-bit hashes by quantizing by 64x64 and mapping that to a pixel following a hilbert curve to maintain locality of hash values. Then when multiple hash values fall within the same 64x64 pixel, we darken the pixel, and finally mark it red if we can’t go any further to indicate clipping.

It’s easy to cherry-pick inputs that will result in some bad behavior by hashable, but below I’ve tried to show some fairly realistic examples of strange or less-good distributions in hashable. I haven’t analysed these at all. Images are cropped ¼ size, but are representative of the whole 32-bit range.

First, here’s a hash of all [Ordering] of size 10 (~59K distinct values):

Hashabler:

Hashable:

Next here’s the hash of one million (Word8,Word8,Word8) (having a domain ~ 16 mil):

Hashabler:

Hashable:

I saw no difference when hashing english words, which is good news as that’s probably a very common use-case.

**Please help**

If you could test the library on a big endian machine and let me know how it goes, that would be great. See here.

You can also check out the **TODO**s scattered throughout the code and send
pull requests. I may not be able to get to them until June, but will be very
grateful!

I’m always open to interesting work or just hearing about how companies are using haskell. Feel free to send me an email at brandon.m.simmons@gmail.com

### Jan Stolarek: Smarter conditionals with dependent types: a quick case study

Find the type error in the following Haskell expression:

```haskell
if null xs then tail xs else xs
```

You can’t, of course: this program is obviously nonsense unless you’re a typechecker. The trouble is that only certain computations make sense if the null xs test is True, whilst others make sense if it is False. However, as far as the type system is concerned, the type of the then branch is the type of the else branch is the type of the entire conditional. Statically, the test is irrelevant. Which is odd, because if the test really were irrelevant, we wouldn’t do it. Of course, tail [] doesn’t go wrong – well-typed programs don’t go wrong – so we’d better pick a different word for the way they do go.

The above quote is an opening paragraph of Conor McBride’s “Epigram: Practical Programming with Dependent Types” paper. As always, Conor makes a good point – this test is completely irrelevant for the typechecker although it is very relevant at run time. Clearly the type system fails to accurately approximate runtime behaviour of our program. In this short post I will show how to fix this in Haskell using dependent types.
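Concretely, GHC accepts the definition without complaint, and the mistake only surfaces when the program runs:

```haskell
-- Typechecks fine; the bug is invisible to the type system.
bad :: [a] -> [a]
bad xs = if null xs then tail xs else xs

main :: IO ()
main = do
  print (bad [1, 2, 3 :: Int])  -- else branch, no harm done
  print (bad ([] :: [Int]))     -- *** Exception: Prelude.tail: empty list
```

The type system sees two branches of type [a] and is satisfied; the connection between the test and which branch is safe is lost.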

The problem is that the types used in this short program carry no information about the manipulated data. This is true both for the Bool returned by null xs, which contains no evidence of the result, as well as for lists, which store no information about their length. As some of you probably realize, the latter is easily fixed by using vectors, i.e. length-indexed lists:

```haskell
data N = Z | S N  -- natural numbers

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)
```

The type of a vector encodes its length, which means that the type checker can now be aware whether it is dealing with an empty vector. Now let’s write null and tail functions that work on vectors:

```haskell
vecNull :: Vec a n -> Bool
vecNull Nil        = True
vecNull (Cons _ _) = False

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl
```

vecNull is nothing surprising – it returns True for an empty vector and False for a non-empty one. But the tail function for vectors differs from its implementation for lists. tail from Haskell’s standard prelude is not defined for an empty list, so calling tail [] results in an exception (that would be the case in Conor’s example). But the type signature of vecTail requires that the input vector be non-empty. As a result we can rule out the Nil case. That also means that Conor’s example will no longer typecheck [1]. But how can we write a correct version of this example, one that removes the first element of a vector only when it is non-empty? Here’s an attempt:

```haskell
shorten :: Vec a n -> Vec a m
shorten xs = case vecNull xs of
  True  -> xs
  False -> vecTail xs
```

That however won’t compile: now that we have written a type-safe tail function, the typechecker requires a proof that the vector passed to it as an argument is non-empty. The weak link in this code is the vecNull function. It tests whether a vector is empty but delivers no type-level proof of the result. In other words we need:

```haskell
vecNull' :: Vec a n -> IsNull n
```

i.e. a function with a result type carrying information about the length of the list. This data type will have a runtime representation isomorphic to Bool, i.e. it will be an enumeration with two constructors, and the type index will correspond to the length of a vector:

```haskell
data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)
```

Null represents empty vectors, NotNull represents non-empty ones. We can now implement a version of vecNull that carries proof of the result at the type level:

```haskell
vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull
```

The type signature of vecNull' says that the return type must have the same index as the input vector. Pattern matching on the Nil case provides the type checker with the information that the n index of Vec is Z. This means that the return value in this case must be Null – the NotNull constructor is indexed with S and that obviously does not match Z. Similarly in the Cons case the return value must be NotNull. However, replacing vecNull in the definition of shorten with our new vecNull' will again result in a type error. The problem comes from the type signature of shorten:

```haskell
shorten :: Vec a n -> Vec a m
```

By indexing input and output vectors with different length indices – n and m – we tell the typechecker that these are completely unrelated. But that is not true! Knowing the input length n we know exactly what the result should be: if the input vector is empty the result vector is also empty; if the input vector is not empty it should be shortened by one. Since we need to express this at the type level we will use a type family:

```haskell
type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n
```

(In a fully-fledged dependently-typed language we would write a normal function and then apply it at the type level.) Now we can finally write:

```haskell
shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs
```

This definition should not go wrong. Trying to swap the expressions in the branches will result in a type error.
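For completeness, the whole development compiles as one small module; a sketch with the required extensions (the toList helper is mine, added just to print results):

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

data N = Z | S N

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)

vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl

type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n

shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs       -- here n ~ Z, and Pred Z = Z
  NotNull -> vecTail xs  -- here n ~ S m, and Pred (S m) = m

toList :: Vec a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = do
  print (toList (shorten (Cons 1 (Cons 2 Nil))))  -- [2]
  print (toList (shorten (Nil :: Vec Int Z)))     -- []
```

Matching on the GADT result of vecNull' is what refines n in each branch, so both right-hand sides are accepted and both wrong ones are rejected.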

[1] Assuming we don’t abuse Haskell’s unsoundness as a logic, e.g. by using undefined.

### What are some good examples of Haskell services/daemons?

Hi.

What are some good examples of open-source Haskell services/daemons that have been set up to work with modern init systems like systemd/upstart? I am hoping to find a good role model for the service I am working on making. Particularly, I am interested in how it handles configuration files, reloading configuration files when getting a HUP signal, and error logging.

Ryan
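On the SIGHUP point, a minimal sketch with the unix package is below. loadConfig is a hypothetical stand-in for real config parsing, and raiseSignal stands in for an external `kill -HUP`; a real daemon would run its service loop instead:

```haskell
import Control.Concurrent (threadDelay)
import Data.IORef (newIORef, readIORef, writeIORef)
import System.Posix.Signals (Handler (Catch), installHandler, raiseSignal, sigHUP)

-- Hypothetical loader; a real service would read and parse its config file.
loadConfig :: IO String
loadConfig = pure "config v1"

main :: IO ()
main = do
  cfg <- newIORef =<< loadConfig
  -- Reload the config whenever the process receives SIGHUP.
  _ <- installHandler sigHUP (Catch (loadConfig >>= writeIORef cfg)) Nothing
  raiseSignal sigHUP  -- simulate an external `kill -HUP <pid>`
  threadDelay 100000  -- give the handler thread a moment to run
  readIORef cfg >>= putStrLn
```

Keeping the current config in an IORef (or TVar) and mutating it from the handler is a common pattern, since GHC runs Catch handlers in a separate thread.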

submitted by ryantm