News aggregator

Using LDAP on Windows?

Haskell on Reddit - Tue, 08/18/2015 - 5:37pm

EDIT Thanks to /u/wrvn for recommending MSYS2. I was able to get everything installed smoothly.


A couple years ago I tried to use this LDAP package on Windows, but couldn't seem to get it working and eventually ended up writing a wrapper around a Python implementation. Now I'm revisiting the problem for a new project and still cannot seem to get things to work.


Does anyone perhaps have a tested methodology for setting this up?


I tried a few different routes and got the furthest with OpenLDAP through Cygwin (the lber and ldap libraries at least appear to be recognized), but now am getting linker errors:

cabal install LDAP --extra-lib-dirs=c:/cygwin64/lib --extra-include-dirs=c:/cygwin64/usr/include

dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x53): undefined reference to `__ctype_ptr__'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x97): undefined reference to `__ctype_ptr__'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0xd5): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0xf4): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x102): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x114): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x127): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x13c): more undefined references to `_impure_ptr' follow
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x174): undefined reference to `__swbuf_r'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x17d): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x188): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x196): undefined reference to `__swbuf_r'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x1a1): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x1b4): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x201): undefined reference to `_impure_ptr'
c:/program files/haskell platform/7.10.2-a/mingw/bin/../lib/gcc/x86_64-w64-mingw32/4.6.3/../../../../x86_64-w64-mingw32/bin/ld.exe: dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o: bad reloc address 0x0 in section `.pdata'
c:/program files/haskell platform/7.10.2-a/mingw/bin/../lib/gcc/x86_64-w64-mingw32/4.6.3/../../../../x86_64-w64-mingw32/bin/ld.exe: final link failed: Invalid operation
collect2: ld returned 1 exit status
linking dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.o failed (exit code 1)
command was: C:\Program Files\Haskell Platform\7.10.2-a\mingw\bin\gcc.exe dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.o dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o -o dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.exe -Lc:/cygwin64/lib -lldap -llber -LC:\Program Files\Haskell Platform\7.10.2-a\lib\base_GDytRqRVSUX7zckgKqJjgw -lwsock32 -luser32 -lshell32 -LC:\Program Files\Haskell Platform\7.10.2-a\lib\integ_2aU3IZNMF9a7mQ0OzsZ0dS -LC:\Program Files\Haskell Platform\7.10.2-a\lib\ghcpr_8TmvWUcS1U1IKHT0levwg3 -LC:\Program Files\Haskell Platform\7.10.2-a\lib/rts -lm -lwsock32 -lgdi32 -lwinmm
cabal: Error: some packages failed to install:
LDAP-0.6.10 failed during the building phase. The exception was: ExitFailure 1


I also posted this on Stack Overflow a few days ago but didn't get any responses: How to install Haskell LDAP on Windows?

submitted by i110gical
[link] [7 comments]
Categories: Incoming News

Functional Jobs: Haskell Engineer at Wagon (Full-time)

Planet Haskell - Tue, 08/18/2015 - 3:38pm

We’re a team of functional programmers writing apps and services in Haskell (and JavaScript). Yes, it’s true: Haskell is our main backend language. We also use functional programming practices across our stack.

Wagon is a great place to do your best work. We love to teach and learn functional programming; our team is humble, hard working, and fun. We speak at the Bay Area Haskell Meetup, contribute to open source, and have weekly lunches with interesting people from the community.

Work on challenging engineering problems at Wagon. How to integrate Haskell with modern client- and server-side technologies, like Electron and Docker? How to deploy and manage distributed systems built with Haskell? Which pieces of our infrastructure should we open-source?

Learn more about our stack, how we combine Haskell, React, and Electron, and what it’s like working at a Haskell-powered startup.

  • love of functional programming
  • personal project or production experience using Haskell, OCaml, Clojure, or Scala
  • passionate (but practical) about software architecture
  • interested in data processing, scaling, and performance challenges
  • experience with databases (optional)
  • write Haskell for client- and server-side applications
  • integrate Haskell with modern tools like Docker, AWS, and Electron
  • architect Wagon to work with analytic databases like Redshift, BigQuery, Spark, etc
  • build systems and abstractions for streaming data processing and numeric computing
  • work with libraries like Conduit, Warp, and Aeson
  • use testing frameworks like QuickCheck and HSpec
  • develop deployment and monitoring tools for distributed Haskell systems

Get information on how to apply for this position.

Categories: Offsite Blogs

Eric Lippert's Sharp Regrets

Lambda the Ultimate - Tue, 08/18/2015 - 1:27pm

In an article for InformIT, Eric Lippert runs down his "bottom 10" C# language design decisions:

When I was on the C# design team, several times a year we would have "meet the team" events at conferences, where we would take questions from C# enthusiasts. Probably the most common question we consistently got was "Are there any language design decisions that you now regret?" and my answer is "Good heavens, yes!"

This article presents my "bottom 10" list of features in C# that I wish had been designed differently, with the lessons we can learn about language design from each decision.

The "lessons learned in retrospect" for each one are nicely done.

Categories: Offsite Discussion

Tutorial on using stack for absolute Haskell beginners?

Haskell on Reddit - Tue, 08/18/2015 - 1:07pm

I'm wondering if someone has already written up a tutorial on using stack for the absolute Haskell beginner. Ideally it would explain:

  • how to get stack
  • how to start a new project
  • how to edit the cabal file to add dependencies
  • how to invoke ghci
  • tips on the debug-edit-compile cycle
  • where to find your compiled binary

Also - it should assume nothing about the user's configuration, meaning that they might have GHC and packages installed but maybe it's messed up in some way.
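For reference, the workflow the poster is asking about boils down to a handful of commands. This is a sketch, not a full tutorial; the project name is hypothetical, and exact paths under .stack-work vary by platform and stack version:

```shell
# install stack (see haskellstack.org for platform-specific instructions)
curl -sSL https://get.haskellstack.org/ | sh

stack new myproject       # scaffold a new project from the default template
cd myproject
stack build               # download GHC if needed, then compile

# dependencies: add the package name to the build-depends section of
# myproject.cabal, then run `stack build` again
stack ghci                # load the project into GHCi for the edit-reload cycle
stack exec myproject-exe  # run the compiled binary

# the binary itself lives under .stack-work/dist/<platform>/<cabal-version>/build
```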


submitted by mn-haskell-guy
[link] [25 comments]
Categories: Incoming News

I am an Electrical Engineer with limited background in software. Should I learn Haskell?

Haskell on Reddit - Tue, 08/18/2015 - 1:04pm

A new course will start in the upcoming weeks. I was wondering if it could be useful for me to know. I am just a couple of years out of school so I'm not really "locked" into any certain area yet. There seems to be an abundance of jobs in software so maybe learning Haskell is a good place to start?

I am familiar with C, C++, and Matlab, but haven't done anything with them besides school assignments.


submitted by gayweatherthrow
[link] [13 comments]
Categories: Incoming News

Using cachegrind with Haskell programs.

Haskell on Reddit - Tue, 08/18/2015 - 11:45am

Since Haskell programs are compiled, I tried running cachegrind on the executable and was very surprised at the results. The L3 cache miss rate was <0.1%, which I thought was odd since the program does a lot of numerical computation using lists and I expected the miss rate to be higher. I am wondering if there are any interactions with the runtime that might throw the results off. Also, would a very low cache miss rate (and tiny GC time) indicate that the performance to be gained from using arrays instead of lists would be very low?
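For anyone wanting to reproduce this kind of measurement, a typical invocation looks something like the following (a sketch; the program name is hypothetical). Comparing against GHC's own +RTS -s statistics helps separate mutator behavior from GC traffic, since the runtime's allocation and collection also show up in cachegrind's counts:

```shell
# build with optimizations and run under cachegrind
ghc -O2 myprog.hs
valgrind --tool=cachegrind ./myprog

# per-function breakdown of the recorded misses
cg_annotate cachegrind.out.<pid>

# GHC's own runtime statistics (allocation, GC time), for comparison
./myprog +RTS -s
```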

submitted by jura__
[link] [4 comments]
Categories: Incoming News

A different take on Foldable?

Haskell on Reddit - Tue, 08/18/2015 - 9:49am

I think Haskell's organization of list algorithms is very confusing. Some functions are right in Prelude, some need Data.List, some need their own import, such as chunksOf from Data.List.Split. Moreover, there is way too much repetition across Hackage. The very same algorithm (not definition) for sum, for example, appears in Prelude, Data.List, Data.Vector and Data.Foldable. The organization looks arbitrary, redundant and confusing. Since Foldable is essentially a way to write list algorithms generically, why don't we just get rid of every "list algorithm" in those libs and accumulate everything in Data.Foldable?

Well, of course, Foldable is not powerful enough to write them all. We can write sum and length, but not filter, for example. The issue is that it doesn't allow us to build the original type back. That is, we could actually write filter for any Foldable - but it could only return a list with the filtered elements, not the original type. Yet, how many structures can you think of for which you could extract a list of filtered elements but couldn't filter the structure itself? It seems that in most cases Foldable gives us half of an existing power. That's why I propose a different view. First, notice what happens when we apply foldr to free variables and an arbitrary value:

foldr c n anything == (\ c n -> (c ... (c ... (c ... n))))

We get a church-encoded list. No wonder foldr is the encapsulation of list algorithms - it is nothing but a recipe for transforming something into a church list. My suggestion is that, instead of writing list algorithms for specific types such as List/Text, or for Foldable, we just write them directly for the fold. Something like this:

-- The type of (\ c n -> (c ... (c ... (c ... n))))
type Fold h = forall t . (h -> t -> t) -> t -> t

head    :: Fold a -> a
nil     :: h -> t -> t
cons    :: a -> Fold a -> Fold a
tail    :: Fold a -> Fold a
reverse :: Fold a -> Fold a
map     :: (a -> b) -> Fold a -> Fold b
sum     :: (Num a) => Fold a -> a
filter  :: (a -> Bool) -> Fold a -> Fold a
length  :: Fold a -> Int
zipWith :: (a -> b -> c) -> Fold a -> Fold b -> Fold c

foldr cons nil fold = fold cons nil
foldl cons nil fold = foldr (\ h t accum -> (t (cons accum h))) id fold nil
head fold           = fold (\ h t -> h) undefined
nil                 = \ cons nil -> nil
cons head fold      = \ cons nil -> cons head (fold cons nil)
tail fold           = \ cons nil -> fold (\ h t g -> (g h (t cons))) (const nil) (\ h t -> t)
reverse fold        = \ cons nil -> foldl (flip cons) nil fold
map fn              = \ list cons -> list (cons . fn)
sum fold            = fold (+) 0
filter cond fold    = \ cons nil -> fold (\ h t -> if cond h then cons h t else t) nil
length fold         = fold (const (+ 1)) 0
zipWith fn a b cons nil = (left # a) # ((right # fn) # b)
  where left  = foldr (\ x xs cont -> (cont x xs)) (const nil)
        right = \ fn -> (foldr (\ y ys x cont -> (cons (fn x y) (cont ys))) (const (const nil)))
        (#)   = unsafeCoerce -- :( see
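As a quick sanity check of the encoding, here is a tiny self-contained version of a few of these functions, restricted to the ones whose types work out with plain rank-2 polymorphism and no coercions (function names primed to avoid clashing with the Prelude):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-encoded lists, as in the post
type Fold h = forall t. (h -> t -> t) -> t -> t

fromList :: [a] -> Fold a
fromList xs = \cons nil -> foldr cons nil xs

toList :: Fold a -> [a]
toList fold = fold (:) []

fmap' :: (a -> b) -> Fold a -> Fold b
fmap' f fold = \cons nil -> fold (cons . f) nil

filter' :: (a -> Bool) -> Fold a -> Fold a
filter' p fold = \cons nil -> fold (\h t -> if p h then cons h t else t) nil

main :: IO ()
main = print (toList (fmap' (* 2) (filter' odd (fromList [1 .. 10 :: Int]))))
-- prints [2,6,10,14,18]
```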

That way, we can just put all list algorithms in a single place, making it easier to remember and organize them. We can, then, specialize those to specific types by just using a typeclass:

-- A type class for "things on which we can apply list algorithms"
class Foldable l a where
    fromFold :: Fold a -> l a
    toFold   :: l a -> Fold a

-- Generic list functions
fmap f          = fromFold . map f . toFold
ffilter cond    = fromFold . filter cond . toFold
fzipWith fn a b = fromFold (zipWith fn (toFold a) (toFold b))
... etc ...

Under this view, Foldable is just a recipe on how to convert from a type to a fold and back, so we can use list algorithms on it. And what is good about it is that writing a Foldable instance gives us all list algorithms - not just a very small subset, such as what the "fold as a summary" (whatever that is) view gives us. Of course, there is a point to make about performance. See this generic list-algorithm that multiplies all elements by 3, filters the odd ones, and then multiplies the remaining elements by 2:

foo :: (Num a) => Foldable a -> Foldable a
foo container = fmap (* 2) . ffilter odd . fmap (* 3) $ container

That is expanded to:

foo container = fromFold . map (* 2) . toFold . fromFold . filter odd . toFold . fromFold . map (* 3) . toFold $ container

Which creates 2 intermediate structures and executes 3 O(N) operations. That would be horrible, but two very neat things will happen. First, since we know that toFold . fromFold == id, we can make a rewrite rule that eliminates that. This way, foo becomes:

foo container = fromFold . map (* 2) . filter odd . map (* 3) . toFold $ container

Which eliminates the intermediate structures, getting us to the same performance we would expect if we programmed the functions for the original structure. Now, since none of our list fold functions is recursive, GHC can inline those at will. By just inlining the middle part - map (* 2) . filter odd . map (* 3) - we get this:

foo container = fromFold
              . (\ fold cons nil -> (fold (\ head tail ->
                  (if odd ((* 3) head)
                    then (cons ((* 2) ((* 3) head)) tail)
                    else tail)) nil))
              . toFold $ container

Which, for better visualization, can be reconstructed as:

middlePart fold cons nil = fold innerLoop nil
  where innerLoop head tail =
          if odd (head * 3)
            then cons (head * 3 * 2) tail
            else tail

What we can see here is that the 3 O(N) operations got fused into a single pass, the innerLoop. Even the (* 2) and the (* 3), which came from different maps separated by a filter, managed to end up in the same place. So, in other words, writing that Foldable instance for a list-like type doesn't just give you access to all list algorithms; it gives you a complete fusion framework for free. An example:

instance Foldable [] where
    fromFold fold            = fold (:) []
    toFold (x : xs) cons nil = cons x (toFold xs cons nil)
    toFold []       cons nil = nil

main = do
    print $ (fmap (* 2) [1,2,3] :: [Int])
    print $ (fzip [1,2,3] [4,5,6] :: [(Int,Int)])
    -- Could just do the same for vectors, arrays, text, queues, perhaps set?

I'm not a type theorist, so I'm not sure whether this has really terrible theoretical implications, but at least from an engineering point of view it looks better. Indeed, that might well be the case, as I couldn't get the right types for most of the functions on Fold and had to use unsafeCoerce in some places - but maybe more advanced features of the type system could do it correctly? I don't know. What do you think?

submitted by SrPeixinho
[link] [37 comments]
Categories: Incoming News

Understanding the State monad

Haskell on Reddit - Tue, 08/18/2015 - 3:32am

I am just trying to understand Haskell and I am stuck on the State monad. First of all, I am confused about where it is defined (all the other monads I know about - IO, Maybe, List, functions - seem to be easily accessible to me). Secondly, I wish to write a function that reads input (using IO), updates some inner state, and once in a while prints the state. The infinite loop would probably have to be a recursion of main :: IO (). Can anybody please (pretty pretty please) sketch such a function for me, so that I can analyze it? If yes, please use standard Haskell types/functions so that I can trace them in the Prelude and such, rather than custom-made code (where possible). Thank you.
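For what it's worth, the State monad lives in Control.Monad.State from the mtl package (built on the transformers package), which is why it doesn't show up in the Prelude. A minimal sketch of the kind of loop described above might look like this, using StateT to layer state over IO; the (Int, Int) state (line count, total length) and the every-fifth-line report are illustrative assumptions, not anything from the post:

```haskell
import Control.Monad.State  -- from the mtl package

-- Pure state transition: count lines and accumulate their total length.
step :: String -> (Int, Int) -> (Int, Int)
step line (n, total) = (n + 1, total + length line)

-- Read a line, update the inner state, and print it every fifth line.
loop :: StateT (Int, Int) IO ()
loop = do
  line <- liftIO getLine
  modify (step line)
  st <- get
  when (fst st `mod` 5 == 0) $ liftIO (print st)
  loop

main :: IO ()
main = evalStateT loop (0, 0)
```

Keeping the transition function `step` pure makes the stateful part easy to test in isolation, while the loop itself stays a thin IO shell.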

submitted by jd823592
[link] [22 comments]
Categories: Incoming News

Looking for a library for parsing the "aeson" Value

Haskell on Reddit - Mon, 08/17/2015 - 10:44pm

IMO there are ways to improve over the Parser API of the "aeson" library. Do there exist any alternatives for parsing Value into Haskell data structures?

submitted by nikita-volkov
[link] [7 comments]
Categories: Incoming News

mightybyte: "cabal gen-bounds": easy generation of dependency version bounds

Planet Haskell - Mon, 08/17/2015 - 10:40pm

In my last post I showed how release dates are not a good way of inferring version bounds. The package repository should not make assumptions about what versions you have tested against. You need to tell it. But from what I've seen there are two problems with specifying version bounds:

  1. Lack of knowledge about how to specify proper bounds
  2. Unwillingness to take the time to do so

Early in my Haskell days, the first time I wrote a cabal file I distinctly remember getting to the dependencies section and having no idea what to put for the version bounds. So I just ignored them and moved on. The result of that decision is that I can no longer build that app today. I would really like to, but it's just not worth the effort to try.

It wasn't until much later that I learned about the PVP and how to properly set bounds. But even then, there was still an obstacle. It can take some time to add appropriate version bounds to all of a package's dependencies. So even if you know the correct scheme to use, you might not want to take the time to do it.
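For concreteness, the PVP treats the first two components of a version number as the "major" version, so a package tested against, say, text-1.2.1.0 would declare bounds like the following (a hypothetical build-depends fragment; the packages and versions are illustrative):

```cabal
build-depends: base >= 4.8  && < 4.9,
               text >= 1.2.1 && < 1.3
```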

Both of these problems are surmountable. And in the spirit of doing that, I would like to propose a "cabal gen-bounds" command. It would check all dependencies to see which ones are missing upper bounds and output correct bounds for them. I have implemented this feature and it is available online. Here is what it looks like to use this command on the cabal-install package:

$ cabal gen-bounds
Resolving dependencies...
The following packages need bounds and here is a suggested starting point.
You can copy and paste this into the build-depends section in your .cabal
file and it should work (with the appropriate removal of commas).

Note that version bounds are a statement that you've successfully built and
tested your package and expect it to work with any of the specified package
versions (PROVIDED that those packages continue to conform with the PVP).
Therefore, the version bounds generated here are the most conservative based
on the versions that you are currently building with. If you know your
package will work with versions outside the ranges generated here, feel free
to widen them.

network     >= 2.6.2 && < 2.7,
network-uri >= 2.6.0 && < 2.7,

The user can then paste these lines into the build-depends section of their .cabal file. They are formatted in a way that facilitates easy editing as the user finds more versions (either newer or older) that the package builds with. This serves both to educate users and to automate the process. I think this removes one of the main frustrations people have about upper bounds and is a step in the right direction of getting more Hackage packages to supply them. Hopefully it will be merged upstream and be available in cabal-install in the future.

Categories: Offsite Blogs

Thiago Negri: Dunning-Kruger effect on effort estimates

Planet Haskell - Mon, 08/17/2015 - 7:23pm
This post has two parts. The first is an experiment with a poll. The second is the actual content with my thoughts.

The experiment and the poll comes first as I don't want to infect you with my idea before you answer the questions. If you are in the mood of reading a short story and answering a couple of questions, keep reading. In case you are only concerned with my ideas, you may skip the first part.

I won't give any discussion about the subject. I'm just throwing my ideas to the internet, be warned.

Part 1. The experiment
You have to estimate the effort needed to complete a particular task of software development. You may use any tool you'd like to do it, but you will only get as much information as I will tell you now. You will use all the technologies that you already know, so you won't have any learning curve overhead and you will not encounter any technical difficulty when doing the task.

Our customer is bothered by missing his co-workers' birthdays. He wants to see all co-workers who are celebrating a birthday or have just celebrated one, so he can send a "happy birthday" message first thing in the morning, when he has just turned on his computer. To avoid sending duplicate messages, he doesn't want to see the same person on the list on multiple days.

Your current software system already has all the workers of the company with their birthdates and their relationships, so you can figure out pretty easily who the co-workers of the user are and when everyone's birthday is.

Now, stop reading further, take your time and estimate the effort of this task by answering the following poll.

[Poll: Estimate your effort]

Okay, now I'll give you more information about it and ask for your estimate again.

Some religions do not celebrate birthdays, and some people get really mad when they receive a "happy birthday" message. To avoid this, you also need to check whether the user wants to make their birthdate public.

By the way, the customer's company is closed on weekends, so you need to take into account that on Monday you will need to show birthdays that happened over the weekend, not only on the current day.

This also applies to holidays. The holidays are a bit harder, as they depend on the city of the employee; different cities may have different holidays.

Oh, and don't forget to take into account that the user may have missed a day, so he needs to see everyone he would have seen on the day he missed work.

Now, take your time and estimate again.

[Poll: Estimate your effort - II]

Part 2. The Dunning-Kruger effect on estimates
I don't know if the little story above tricked you or not, but that same story tricked me in real life. :)

The Dunning-Kruger effect is stated at Wikipedia as:

"[...] a cognitive bias wherein relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to accurately evaluate their own ability level. Conversely, highly skilled individuals may underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others."

I'm seeing that this effect makes the task of estimating effort inherently inaccurate, as it always pulls toward a bad outcome. If you know little about a task, you will overestimate your knowledge and consequently underestimate the effort to accomplish it. If you know a lot, you will underestimate your knowledge and consequently overestimate the effort.

I guess one way to minimize this problem is to remove knowledge up to the point that you only have left the essential needed to complete the task. Sort of what Taleb calls "via negativa" in his Antifragile book.

What do you think? Does this make any sense to you?
Categories: Offsite Blogs

Kill forked threads in ghci

haskell-cafe - Mon, 08/17/2015 - 6:14pm
Is there a way in ghci to kill all running background processes without quitting ghci itself? E.g. if I do

> forkIO . forever $ print 5 >> threadDelay 1000000

If I don't have the ThreadId, is there any way for me to stop printing "5"s without killing ghci?

tom
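For reference, if the ThreadId returned by forkIO is captured, the thread can later be stopped explicitly with killThread; inside ghci the same works if the ThreadId is bound at the prompt. Below is a sketch as a compiled program; the tick-counting IORef is an illustrative stand-in for the print 5 above, chosen so that the effect of killThread is observable:

```haskell
import Control.Concurrent
import Control.Monad
import Data.IORef

-- Fork a background thread, let it tick a few times, then kill it
-- using the ThreadId captured at fork time.
demo :: IO Int
demo = do
  ref <- newIORef (0 :: Int)
  tid <- forkIO . forever $ modifyIORef' ref (+ 1) >> threadDelay 100000
  threadDelay 350000   -- let the background thread run for a while
  killThread tid       -- stop it explicitly
  threadDelay 300000   -- no further ticks happen after the kill
  readIORef ref

main :: IO ()
main = demo >>= print
```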
Categories: Offsite Discussion

LPNMR 2015 - Call for participation

General haskell list - Mon, 08/17/2015 - 4:41pm
[apologies for possible multiple copies]

Call for Participation
---------------------------------------------------------------------------
13th International Conference on Logic Programming and Non-monotonic Reasoning
LPNMR 2015
Lexington, KY, USA
September 27-30, 2015
(Collocated with the 4th Conference on Algorithmic Decision Theory 2015)
---------------------------------------------------------------------------

REGISTRATION

Registration procedure is available via
Early registration closes before the end of July.

AIMS AND SCOPE

LPNMR 2015 is the thirteenth in the series of international meetings on logic programming and non-monotonic reasoning. LPNMR is a forum for exchanging ideas on declarative logic programming, non-monotonic reasoning, and
Categories: Incoming News

STABILIZER : Statistically Sound Performance Evaluation

Lambda the Ultimate - Mon, 08/17/2015 - 2:45pm

My colleague Mike Rainey described this paper as one of the nicest he's read in a while.

STABILIZER : Statistically Sound Performance Evaluation
Charlie Curtsinger, Emery D. Berger

Researchers and software developers require effective performance evaluation. Researchers must evaluate optimizations or measure overhead. Software developers use automatic performance regression tests to discover when changes improve or degrade performance. The standard methodology is to compare execution times before and after applying changes.

Unfortunately, modern architectural features make this approach unsound. Statistically sound evaluation requires multiple samples to test whether one can or cannot (with high confidence) reject the null hypothesis that results are the same before and after. However, caches and branch predictors make performance dependent on machine-specific parameters and the exact layout of code, stack frames, and heap objects. A single binary constitutes just one sample from the space of program layouts, regardless of the number of runs. Since compiler optimizations and code changes also alter layout, it is currently impossible to distinguish the impact of an optimization from that of its layout effects.

This paper presents STABILIZER, a system that enables the use of the powerful statistical techniques required for sound performance evaluation on modern architectures. STABILIZER forces executions to sample the space of memory configurations by repeatedly re-randomizing layouts of code, stack, and heap objects at runtime. STABILIZER thus makes it possible to control for layout effects. Re-randomization also ensures that layout effects follow a Gaussian distribution, enabling the use of statistical tests like ANOVA. We demonstrate STABILIZER's efficiency (< 7% median overhead) and its effectiveness by evaluating the impact of LLVM’s optimizations on the SPEC CPU2006 benchmark suite. We find that, while -O2 has a significant impact relative to -O1, the performance impact of -O3 over -O2 optimizations is indistinguishable from random noise.

One take-away of the paper is the following technique for validation: they verify, empirically, that their randomization technique results in a Gaussian distribution of execution times. This does not guarantee that they found all the sources of measurement noise, but it guarantees that the sources of noise they handled are properly randomized, and that their effect can be reasoned about rigorously using the usual tools of statisticians. Having a Gaussian distribution gives you much more than just "hey, taking the average over these runs makes you resilient to {weird hardware effect blah}"; it lets you compute p-values and in general use statistics.

Categories: Offsite Discussion

ETAPS 2016 call for papers

General haskell list - Mon, 08/17/2015 - 2:39pm
******************************************************************
CALL FOR PAPERS: ETAPS 2016
19th European Joint Conferences on Theory And Practice of Software
Eindhoven, The Netherlands, 2-8 April 2016
******************************************************************
Categories: Incoming News