TL;DR: port an existing prototype to Haskell for a Singaporean startup building OKCupid for jobs, with several large corporates already signed up. Join as a co-founder, not an employee.
Compensation is salary plus double-digit percent equity. There will be a short trial period to make sure both sides want to work with each other. Relocation to Singapore is possible but optional - remote is OK.
For a variety of reasons, we are keeping the company name to ourselves until you apply; it will be included, along with further details, in our first email back to you.
Job seekers answer questions to figure out their culture and values, and companies do the same (in practice, we go to the company and help them answer the questions).
Like OKCupid's date-matching algorithm, companies and job seekers are then matched regardless of job description (as we and our clients believe that drive and intelligence are more important than specific experience).
This both filters and expands the applicant pool for the company and speeds up the process tremendously, which our clients like very much (we'll send you a full list of the ones we've signed up already). It also helps applicants make sense of thousands of job openings, including those they don't qualify for on paper. This might not make sense in your home market, but we have had a lot of demand for it here in Asia.
Job description and compensation
A functional spec and two simple prototypes have already been built in one of the JS frameworks; this is what existing customers are using. You will use these as a basis to implement the product "properly", including a much better clustering and matching algorithm - the fun part.
We provide the legal, sales, financial and admin functions; you just need to worry about building the product. The division of labour will be very clear: you have the final call on anything technical, and nobody will micromanage your work.
Compensation will be a lowish middle-class salary by Singapore standards plus double-digit percent equity, subject to a trial period. Salary can and will go up if/once we raise an angel round and get NRF grants; it has to be low relative to what you are worth right now because the funds come from the other founder's bank account.
We are deliberately raising little funding, as we think the business can be rapidly cash-flow positive (another path to a higher salary) and we want to avoid dilution - great news for you too, as your equity stake will remain substantial.
As you probably guessed if you've been reading these job ads, this one was influenced by the same people behind the Zalora and Capital Match Haskell teams. We are very keen on functional programming languages, especially Haskell, but we are technology agnostic ("best stack for the problem"). We have a bias towards those who prefer the relational model over NoSQL, towards those who avoid framework "magic", against Agile-type cults, and towards open source.
The CV matters less than your ability to build things, so please send us any major open-source project you have authored: both a link to the repo and a "basic" description targeted at the non-technical founder.
Because the algorithm is so important to the viability of the business, and the rest is relatively simple, we would prefer to see some non-trivial machine-learning experience; that said, someone capable of building a Haskell web app with servant is probably going to be fine reading and applying The Elements of Statistical Learning. If you claim ML experience, we will ask you a few questions to check.
Please contact us at firstname.lastname@example.org
submitted by lesorciere
[link] [3 comments]
Hi. I've just configured IntelliJ IDEA to work with Haskell. The only problem is that I can't run individual scripts: I have to run the main project/module file. With Python, I am able to run single files, which makes learning that much easier. Is there any way to run individual files with IntelliJ and Haskell?
I've also found a third-party plugin for Haskell support called HaskForce. Is it any good?
Maybe other IDE suggestions?
submitted by sanshinron
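Outside any IDE, a single .hs file can be run directly from the command line: `runghc MyFile.hs` interprets it on the spot (or `stack runghc MyFile.hs` under Stack), and `ghci MyFile.hs` loads just that file interactively - much like running a lone Python script. A minimal sketch; the file name and contents here are made up for illustration:

```haskell
-- Squares.hs - a self-contained script.
-- Run with:  runghc Squares.hs   (or: stack runghc Squares.hs)
-- No project or main module file is needed; any file defining `main` works.
main :: IO ()
main = print [x * x | x <- [1 .. 5]]  -- prints [1,4,9,16,25]
```

Since runghc interprets the file, there is no compiled binary to manage while learning.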
[link] [22 comments]
EDIT: Thanks to /u/wrvn for recommending MSYS2. I was able to get everything installed smoothly.
A couple years ago I tried to use this LDAP package on Windows, but couldn't seem to get it working and eventually ended up writing a wrapper around a Python implementation. Now I'm revisiting the problem for a new project and still cannot seem to get things to work.
Does anyone perhaps have a tested methodology for setting this up?
I tried a few different routes and got the furthest with OpenLDAP through Cygwin (the lber and ldap libraries at least appear to be recognized), but now am getting linker errors:

cabal install LDAP --extra-lib-dirs=c:/cygwin64/lib --extra-include-dirs=c:/cygwin64/usr/include

dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x53): undefined reference to `__ctype_ptr__'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x97): undefined reference to `__ctype_ptr__'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0xd5): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0xf4): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x102): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x114): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x127): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x13c): more undefined references to `_impure_ptr' follow
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x174): undefined reference to `__swbuf_r'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x17d): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x188): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x196): undefined reference to `__swbuf_r'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x1a1): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x1b4): undefined reference to `_impure_ptr'
dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o:Types_hsc_utils.c:(.text+0x201): undefined reference to `_impure_ptr'
c:/program files/haskell platform/7.10.2-a/mingw/bin/../lib/gcc/x86_64-w64-mingw32/4.6.3/../../../../x86_64-w64-mingw32/bin/ld.exe: dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o: bad reloc address 0x0 in section `.pdata'
c:/program files/haskell platform/7.10.2-a/mingw/bin/../lib/gcc/x86_64-w64-mingw32/4.6.3/../../../../x86_64-w64-mingw32/bin/ld.exe: final link failed: Invalid operation
collect2: ld returned 1 exit status
linking dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.o failed (exit code 1)
command was: C:\Program Files\Haskell Platform\7.10.2-a\mingw\bin\gcc.exe dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.o dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_utils.o -o dist\dist-sandbox-8cd1684e\build\LDAP\Types_hsc_make.exe -Lc:/cygwin64/lib -lldap -llber -LC:\Program Files\Haskell Platform\7.10.2-a\lib\base_GDytRqRVSUX7zckgKqJjgw -lwsock32 -luser32 -lshell32 -LC:\Program Files\Haskell Platform\7.10.2-a\lib\integ_2aU3IZNMF9a7mQ0OzsZ0dS -LC:\Program Files\Haskell Platform\7.10.2-a\lib\ghcpr_8TmvWUcS1U1IKHT0levwg3 -LC:\Program Files\Haskell Platform\7.10.2-a\lib/rts -lm -lwsock32 -lgdi32 -lwinmm
cabal: Error: some packages failed to install:
LDAP-0.6.10 failed during the building phase. The exception was: ExitFailure 1
I also posted this on Stack Overflow a few days ago but didn't get any responses: How to install Haskell LDAP on Windows?
submitted by i110gical
[link] [7 comments]
Wagon is a great place to do your best work. We love to teach and learn functional programming; our team is humble, hard working, and fun. We speak at the Bay Area Haskell Meetup, contribute to open source, and have weekly lunches with interesting people from the community.
Work on challenging engineering problems at Wagon. How to integrate Haskell with modern client- and server-side technologies, like Electron and Docker? How to deploy and manage distributed systems built with Haskell? Which pieces of our infrastructure should we open-source?
- love of functional programming
- personal project or production experience using Haskell, OCaml, Clojure, or Scala
- passionate (but practical) about software architecture
- interested in data processing, scaling, and performance challenges
- experience with databases (optional)
- write Haskell for client- and server-side applications
- integrate Haskell with modern tools like Docker, AWS, and Electron
- architect Wagon to work with analytic databases like Redshift, BigQuery, Spark, etc
- build systems and abstractions for streaming data processing and numeric computing
- work with libraries like Conduit, Warp, and Aeson
- use testing frameworks like QuickCheck and HSpec
- develop deployment and monitoring tools for distributed Haskell systems
Get information on how to apply for this position.
In an article for InformIT, Eric Lippert runs down his "bottom 10" C# language design decisions:
When I was on the C# design team, several times a year we would have "meet the team" events at conferences, where we would take questions from C# enthusiasts. Probably the most common question we consistently got was "Are there any language design decisions that you now regret?" and my answer is "Good heavens, yes!"
This article presents my "bottom 10" list of features in C# that I wish had been designed differently, with the lessons we can learn about language design from each decision.
The "lessons learned in retrospect" for each one are nicely done.
I'm wondering if someone has already written up a tutorial on using stack for the absolute Haskell beginner. Ideally it would explain:
- how to get stack
- how to start a new project
- how to edit the cabal file to add dependencies
- how to invoke ghci
- tips on the debug-edit-compile cycle
- where to find your compiled binary
Also, it should assume nothing about the user's configuration - they might already have GHC and packages installed, but possibly in a broken state.
Thanks!
submitted by mn-haskell-guy
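In the meantime, a compressed sketch of the workflow (assuming a recent stack; the project and package names below are illustrative, not from any real project): `stack setup` downloads a project-local GHC, so a broken global installation doesn't matter; `stack new myproj && cd myproj` creates a project; `stack build` compiles it; `stack ghci` starts GHCi with the project loaded; and `stack exec myproj-exe` runs the compiled binary, which lives under `.stack-work/`. Dependencies are added to the build-depends field of the generated .cabal file:

```
executable myproj-exe
  hs-source-dirs:      app
  main-is:             Main.hs
  build-depends:       base
                     , text
                     , containers
  default-language:    Haskell2010
```

After editing build-depends, rerun `stack build`; stack fetches the new packages from the configured snapshot automatically.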
[link] [25 comments]
A new course will start in the upcoming weeks, and I was wondering whether Haskell could be useful for me to know. I am just a couple of years out of school, so I'm not really "locked" into any particular area yet. There seems to be an abundance of jobs in software, so maybe learning Haskell is a good place to start?
I am familiar with C, C++ and Matlab, but haven't done anything with them besides school assignments.
Thanks
submitted by gayweatherthrow
[link] [13 comments]
Since Haskell programs are compiled, I tried running cachegrind on the executable and was very surprised at the results. The L3 cache miss rate was <0.1%, which I found odd, since the program does a lot of numerical computation using lists and I expected the miss rate to be higher. Are there any interactions with the runtime that might throw the results off? Also, would a very low cache miss rate (and tiny GC time) indicate that there is little performance to be gained from using arrays instead of lists?
submitted by jura__
[link] [4 comments]
I find Haskell's organization of list algorithms very confusing. Some functions are right in the Prelude, some need Data.List, some need their own import, such as chunksOf from Data.List.Split. Moreover, there is way too much repetition across Hackage: the very same algorithm (not just definition) for sum, for example, appears in Prelude, Data.List, Data.Vector and Data.Foldable. The organization looks arbitrary, redundant and confusing. Since Foldable is essentially a way to write list algorithms generically, why don't we just get rid of every "list algorithm" in those libs and accumulate everything in Data.Foldable?
Well, of course, Foldable is not powerful enough to write them all. We can write sum and length, but not filter, for example. The issue is that it doesn't allow us to build the original type back. That is, we could actually write filter for any Foldable - but it could only return a list with the filtered elements, not the original type. Yet, how many structures can you think of for which you could extract a list of filtered elements, but couldn't filter the structure itself? It seems that in most cases Foldable gives us half of the power we could have. That's why I propose a different view. First, notice what happens when we apply foldr to free variables and an arbitrary value:

foldr c n anything == (\ c n -> (c ... (c ... (c ... n))))
We get a church-encoded list. No wonder foldr is the encapsulation of list algorithms - it is nothing but a recipe for transforming something into a church list. My suggestion is that, instead of writing list algorithms for specific types such as List/Text, or for Foldable, we just write them directly for the fold. Something like this:

-- The type of (\ c n -> (c ... (c ... (c ... n))))
type Fold h = forall t . (h -> t -> t) -> t -> t

head    :: Fold a -> a
nil     :: h -> t -> t
cons    :: a -> Fold a -> Fold a
tail    :: Fold a -> Fold a
reverse :: Fold a -> Fold a
map     :: (a -> b) -> Fold a -> Fold b
sum     :: (Num a) => Fold a -> a
filter  :: (a -> Bool) -> Fold a -> Fold a
length  :: Fold a -> Int
zipWith :: (a -> b -> c) -> Fold a -> Fold b -> Fold c

foldr cons nil fold  = fold cons nil
foldl cons nil fold  = foldr (\ h t accum -> (t (cons accum h))) id fold nil
head fold            = fold (\ h t -> h) undefined
nil                  = \ cons nil -> nil
cons head fold       = \ cons nil -> cons head (fold cons nil)
tail fold            = \ cons nil -> fold (\ h t g -> (g h (t cons))) (const nil) (\ h t -> t)
reverse fold         = \ cons nil -> foldl (flip cons) nil fold
map fn               = \ list cons -> list (cons . fn)
sum fold             = fold (+) 0
filter cond fold     = \ cons nil -> fold (\ h t -> if cond h then cons h t else t) nil
length fold          = fold (const (+ 1)) 0
zipWith fn a b cons nil = (left # a) # ((right # fn) # b)
  where left  = foldr (\ x xs cont -> (cont x xs)) (const nil)
        right = \ fn -> (foldr (\ y ys x cont -> (cons (fn x y) (cont ys))) (const (const nil)))
        (#)   = unsafeCoerce -- :( see goo.gl/hLN88a
That way, we can just put all list algorithms in a single place, making it easier to remember and organize them. We can then specialize them to specific types by just using a typeclass:

-- A type class for "things on which we can apply list algorithms"
class Foldable l a where
  fromFold :: Fold a -> l a
  toFold   :: l a -> Fold a

-- Generic list functions
fmap f          = fromFold . map f . toFold
ffilter cond    = fromFold . filter cond . toFold
fzipWith fn a b = fromFold (zipWith fn (toFold a) (toFold b))
... etc ...
Under this view, Foldable is just a recipe for converting from a type to a fold and back, so we can use list algorithms on it. And what is good about it is that writing a Foldable instance gives us all list algorithms - not just a very small subset, such as what the "fold as a summary" (whatever that is) view gives us. Of course, there is a point to make about performance. See this generic list algorithm that multiplies all elements by 3, filters the odd ones, and then multiplies the remaining elements by 2:

foo :: (Num a) => Foldable a -> Foldable a
foo container = fmap (* 2) . ffilter odd . fmap (* 3) $ container
That is expanded to:

foo container = fromFold . map (* 2) . toFold . fromFold . filter odd . toFold . fromFold . map (* 3) . toFold $ container
Which creates 2 intermediate structures and executes 3 O(N) operations. That would be horrible, but two very neat things happen. First, since we know that toFold . fromFold == id, we can make a rewrite rule that eliminates those pairs. This way, foo becomes:

foo container = fromFold . map (* 2) . filter odd . map (* 3) . toFold $ container
Which eliminates the intermediate structures, getting us the same performance we would expect if we had programmed the functions for the original structure. Now, since none of our list fold functions is recursive, GHC can inline them at will. By just inlining the middle part - map (* 2) . filter odd . map (* 3) - we get this:

foo container = fromFold . (\ fold cons nil -> (fold (\ head tail -> (if odd ((* 3) head) then (cons ((* 2) ((* 3) head)) tail) else tail)) nil)) . toFold $ container
Which, for better visualization, can be reconstructed as:

middlePart fold cons nil = fold innerLoop nil
  where innerLoop head tail =
          if odd (head * 3)
            then cons (head * 3 * 2) tail
            else tail
What we can see here is that the 3 O(N) operations got fused into a single pass, the innerLoop. Even the (* 2) and the (* 3), which came from different maps separated by a filter, managed to end up in the same place. So, in other words, writing that Foldable instance for a list-like type doesn't only give you access to all list algorithms, but a complete fusion framework for free. An example:

instance Foldable [] where
  fromFold fold            = fold (:) []
  toFold (x : xs) cons nil = cons x (toFold xs cons nil)
  toFold []       cons nil = nil

main = do
  print $ (fmap (* 2) [1,2,3] :: [Int])
  print $ (fzip [1,2,3] [4,5,6] :: [(Int,Int)])
-- Could just do the same for vectors, arrays, text, queues, perhaps sets?
I'm not a type theorist, so I'm not sure whether this has really terrible theoretical implications, but at least from an engineering point of view it looks better. Indeed, that might very well be the case, as I couldn't get the right types for most of the Fold functions and had to use unsafeCoerce in some places - but maybe more advanced features of the type system could do it correctly? I don't know. What do you think?
submitted by SrPeixinho
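For what it's worth, the fragment of the idea that doesn't need unsafeCoerce does type-check and run as-is. Here is a minimal self-contained sketch (function names are suffixed with F to avoid Prelude clashes; the examples in main are mine, not from the post):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-encoded list: a value is its own foldr.
type Fold a = forall t. (a -> t -> t) -> t -> t

toFold :: [a] -> Fold a
toFold xs = \c n -> foldr c n xs

fromFold :: Fold a -> [a]
fromFold f = f (:) []

mapF :: (a -> b) -> Fold a -> Fold b
mapF g f = \c n -> f (c . g) n

filterF :: (a -> Bool) -> Fold a -> Fold a
filterF p f = \c n -> f (\h t -> if p h then c h t else t) n

sumF :: Num a => Fold a -> a
sumF f = f (+) 0

-- map/filter/map compose into a single pass over the encoding.
main :: IO ()
main = do
  print (fromFold (mapF (* 2) (filterF odd (mapF (* 3) (toFold [1, 2, 3 :: Int])))))  -- [6,18]
  print (sumF (toFold [1 .. 10 :: Int]))                                              -- 55
```

The functions that defeated the types in the post (zipWith, tail) are exactly the ones that need to consume the fold in a non-uniform way, which is where the unsafeCoerce crept in.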
[link] [37 comments]
I am just trying to understand Haskell and I am stuck on the State monad. First of all, I am confused about where it is defined (all the other monads I know about - IO, Maybe, lists, functions - seem easily accessible to me). Secondly, I wish to write a function that reads input (using IO), updates some inner state, and once in a while prints the state. The infinite loop would probably have to be recursion on main :: IO (). Can anybody please (pretty pretty please) sketch such a function for me, so that I can analyze it? If yes, please use standard Haskell types/functions so that I can trace them in the Prelude, rather than custom-made code (where possible). Thank you.
submitted by jd823592
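(For the record: State is defined in Control.Monad.State, from the mtl package, built on transformers.) For the loop described, plain recursion in IO with the state passed as an argument is enough. A hedged sketch - the choice of state (a character count) and the print-every-third-line policy are made up for illustration:

```haskell
import System.IO (isEOF)

-- Pure state update: here the "inner state" is just a count of
-- characters read so far.
step :: Int -> String -> Int
step total line = total + length line

-- Explicitly recursive IO loop: read a line, update the state,
-- report it every third line, and stop at end of input.
loop :: Int -> Int -> IO ()
loop linesRead total = do
  done <- isEOF
  if done
    then print total                 -- final state
    else do
      line <- getLine
      let total' = step total line
      if (linesRead + 1) `mod` 3 == 0
        then print total'            -- print the state once in a while
        else return ()
      loop (linesRead + 1) total'

main :: IO ()
main = loop 0 0
```

The same structure can later be rewritten with StateT Int IO from Control.Monad.State, with modify replacing the explicit argument threading.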
[link] [22 comments]
In my last post I showed that release dates are not a good way of inferring version bounds. The package repository should not make assumptions about which versions you have tested against; you need to tell it. But from what I've seen, there are two problems with specifying version bounds:
- Lack of knowledge about how to specify proper bounds
- Unwillingness to take the time to do so
Early in my Haskell days, the first time I wrote a cabal file I distinctly remember getting to the dependencies section and having no idea what to put for the version bounds. So I just ignored them and moved on. The result of that decision is that I can no longer build that app today. I would really like to, but it's just not worth the effort to try.
It wasn't until much later that I learned about the PVP and how to properly set bounds. But even then, there was still an obstacle: it can take some time to add appropriate version bounds to all of a package's dependencies. So even if you know the correct scheme to use, you might not want to take the time to do it.
Both of these problems are surmountable. And in the spirit of doing that, I would like to propose a "cabal gen-bounds" command. It would check all dependencies to see which ones are missing upper bounds and output correct bounds for them. I have implemented this feature and it is available at https://github.com/mightybyte/cabal/tree/gen-bounds. Here is what it looks like to use this command on the cabal-install package:

$ cabal gen-bounds
Resolving dependencies...
The following packages need bounds and here is a suggested starting point. You can copy and paste this into the build-depends section in your .cabal file and it should work (with the appropriate removal of commas). Note that version bounds are a statement that you've successfully built and tested your package and expect it to work with any of the specified package versions (PROVIDED that those packages continue to conform with the PVP). Therefore, the version bounds generated here are the most conservative based on the versions that you are currently building with. If you know your package will work with versions outside the ranges generated here, feel free to widen them.

network >= 2.6.2 && < 2.7,
network-uri >= 2.6.0 && < 2.7,
The user can then paste these lines into the build-depends section of their .cabal file. They are formatted in a way that facilitates easy editing as the user finds more versions (either newer or older) that the package builds with. This serves both to educate users and to automate the process. I think it removes one of the main frustrations people have about upper bounds and is a step in the right direction of getting more Hackage packages to supply them. Hopefully it will be merged upstream and become available in cabal-install in the future.
The experiment and the poll come first, as I don't want to infect you with my idea before you answer the questions. If you are in the mood for reading a short story and answering a couple of questions, keep reading. If you are only interested in my ideas, you may skip the first part.
I won't give any discussion about the subject. I'm just throwing my ideas to the internet, be warned.
Part 1. The experiment
You have to estimate the effort needed to complete a particular software development task. You may use any tool you'd like to do it, but you will only get as much information as I give you now. You will use only technologies that you already know, so there will be no learning-curve overhead and you will not encounter any technical difficulty in doing the task.
Our customer is bothered by missing his co-workers' birthdays. He wants to know all co-workers who are celebrating a birthday, or have just celebrated one, so he can send a "happy birthday" message first thing in the morning, when he has just turned on his computer. To avoid sending duplicate messages, he doesn't want to see the same person in the list on multiple days.
Your current software system already has all the company's workers, with their birthdates and their relationships, so you can figure out pretty easily who the user's co-workers are and what everyone's birthdate is.
Now, stop reading further, take your time and estimate the effort of this task by answering the following poll.
[Poll: Estimate your effort]
Okay, now I'll give you more information about it and ask for your estimate again.
Some religions do not celebrate birthdays, and some people get really mad when receiving a "happy birthday" message. To avoid this, you also need to check whether the user wants to make their birthdate public.
By the way, the customer's company closes at the weekend, so you need to take into account that on Monday you will need to show birthdays that happened over the weekend, not only those of the current day.
This also applies to holidays. Holidays are a bit harder, as they depend on the city of the employee: different cities may have different holidays.
Oh, and don't forget to take into account that the user may have missed a day, so he needs to see everyone he would have seen on the day he missed work.
Now, take your time and estimate again.
[Poll: Estimate your effort - II]
Part 2. The Dunning-Kruger effect on estimates
I don't know if the little story above tricked you or not, but that same story tricked me in real-life. :)
The Dunning-Kruger effect is described on Wikipedia as:
"[...] a cognitive bias wherein relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to accurately evaluate their own ability level. Conversely, highly skilled individuals may underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others."
I'm seeing that this effect makes the task of estimating effort inherently inaccurate, as it always pulls towards a bad outcome. If you know little about a task, you will overestimate your knowledge and consequently underestimate the effort to accomplish it. If you know a lot, you will underestimate your knowledge and consequently overestimate the effort.
I guess one way to minimize this problem is to remove knowledge up to the point that you only have left the essential needed to complete the task. Sort of what Taleb calls "via negativa" in his Antifragile book.
What do you think? Does this make any sense to you?