All the side effects were never mentioned to me
I am innocent of uncontrolled abuse
Admittedly it gets more complicated when summing two things at the same time:
let Pair dnormMean dnormNormMean =
      fold (Pair <$> dimap (\(Pair _ dnormI) -> dnormI) (/ fromIntegral cc) sum
                 <*> dimap (\(Pair normBti dnormI) -> normBti * dnormI) (/ fromIntegral cc) sum)
        $ map (\i -> Pair (((inp ! (off + i)) - meanBt) * rstdBt)
                          ((weight ! i) * (dout ! (off + i))))
              [0 .. cc - 1]
For comparison, the corresponding C code:
float dnorm_mean = 0.0f;
float dnorm_norm_mean = 0.0f;
for (int i = 0; i < C; i++) {
    float norm_bti = (inp_bt[i] - mean_bt) * rstd_bt;
    float dnorm_i = weight[i] * dout_bt[i];
    dnorm_mean += dnorm_i;
    dnorm_norm_mean += dnorm_i * norm_bti;
}
dnorm_mean = dnorm_mean / C;
dnorm_norm_mean = dnorm_norm_mean / C;
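The nice part is that the two reductions still share a single pass over the input, thanks to the Applicative instance for folds. A minimal sketch of the same pattern, assuming the Fold type from the foldl package (the helper name meanAndSumSq is made up):

import qualified Control.Foldl as L

-- Two reductions combined applicatively, so the list is traversed once:
-- the mean of the inputs and the sum of their squares.
meanAndSumSq :: [Double] -> (Double, Double)
meanAndSumSq = L.fold ((,) <$> L.mean <*> L.premap (^ 2) L.sum)

-- ghci> meanAndSumSq [1, 2, 3]
-- (2.0,14.0)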
The machine itself can generally only do very simple things
I disagree. Assembly languages for modern architectures are a complexity hell; you need books with thousands of pages to explain how they work. In comparison, the lambda calculus is much simpler.
AP probably stands for ActivityPub
Apparently there was still some manual action required: https://discourse.haskell.org/t/haskell-interlude-44-jose-manuel-calderon-trilla/8935/3?u=jaror
That’s odd. This latest episode is indeed not mentioned on there: https://feeds.buzzsprout.com/1817535.rss. I’d guess it is something on buzzsprout’s side.
The symbolic rewriting is interesting.
I do wonder what “modern-style” functional programming means.
Also their FAQ says:
But considering other FPLs like Haskell and ML, Pure’s library support isn’t bad
Clicking that link reveals a list of about 34 libraries. In comparison, Haskell’s current curated Stackage snapshot has 3340 packages in it (the total number of packages is probably more than 10x that). So, I think it is odd to claim its ecosystem is anywhere near Haskell’s.
We can make it a lot more performant, shorter, and also safer by using lazy byte strings:
{- cabal:
build-depends: base, network, network-run, bytestring
-}
{-# LANGUAGE OverloadedStrings #-}
import Network.Run.TCP (runTCPServer)
import qualified Network.Socket.ByteString.Lazy as Net
import qualified Data.ByteString.Lazy.Char8 as Str
main = runTCPServer (Just "127.0.0.1") "9999" $ \s -> do
  request <- Net.getContents s
  case Str.words (Str.takeWhile (/= '\r') request) of
    ["GET", resource, "HTTP/1.1"] -> do
      let path = Str.concat
            [ "htdocs/"
            , Str.dropWhile (== '/') resource
            , if Str.last resource == '/' then "index.html" else ""
            ]
      page <- Str.readFile (Str.unpack path)
      Net.sendAll s ("HTTP/1.1 200 OK\r\n\r\n" <> page)
    _ -> error "todo"
Actually, if you combine network with network-run, then it is the right level of abstraction:
{- cabal:
build-depends: base, network, network-run, monad-loops
-}
import Network.Run.TCP
import Network.Socket
import System.IO
import Control.Monad.Loops
main = runTCPServer (Just "127.0.0.1") "9999" talk where
  talk s = do
    h <- socketToHandle s ReadWriteMode
    l <- hGetLine h
    case words l of
      ["GET", resource, "HTTP/1.1"] -> do
        whileM_ (("\r" /=) <$> hGetLine h) (pure ())
        let path = concat
              [ "htdocs/"
              , dropWhile (== '/') resource
              , if last resource == '/' then "index.html" else ""
              ]
        hPutStr h "HTTP/1.1 200 OK\r\n\r\n"
        hPutStr h =<< readFile path
        hClose h
      _ -> error "todo"
And for more GHCi performance options see: https://stackoverflow.com/a/77895561/15207568
Another way to put it is that HasCallStack call stacks aren’t trimmed by tail call optimization, and Haskell without tail call optimization ends up with huge stacks.
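A small toy example of what I mean (my own, with a made-up loop function): the recursion is a tail call, yet every call site still pushes another entry onto the implicit CallStack, so it grows linearly with the depth.

import GHC.Stack (HasCallStack, callStack, getCallStack)

-- A tail-recursive loop that reports how many CallStack entries have
-- accumulated by the time it bottoms out.
loop :: HasCallStack => Int -> Int
loop 0 = length (getCallStack callStack)
loop n = loop (n - 1)

main :: IO ()
main = print (loop 10000)  -- prints a number proportional to the depth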
The discussion about incentives for stability was interesting. It reminded me of the maintainership standards proposal. I think it would be very useful to have Hackage show information like how quickly a package updates its version bounds when new versions of its dependencies are released.
@kosmikus @mangoiv I’m not really the right person to ask, having spent exactly zero time in industry. But I can imagine most industrial users have little interest in the main ICFP program and the other co-hosted workshops. So hosting the event separately at a smaller venue for just two days could make it possible to substantially lower the fees (and individual accommodation costs), which naturally makes the event more accessible. And I expect that the fees are generally a bigger problem outside of academia, so this would cater more to industrial users and hobbyists.
This was a fun episode. I was introduced to breadth first labeling and attribute grammars by Doaitse Swierstra at the Applied Functional Programming summer school in Utrecht. He was an inspiring figure.
The biggest disadvantage of circular programs is that it is very easy to get into infinite loops when you make a mistake. I wish there were an easy way to statically guarantee that circular programs terminate (perhaps using types).
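For anyone who hasn’t seen a circular program before, here is a sketch of the classic repmin example (my illustration, not something from the episode): the tree’s minimum is computed by the same traversal that uses it, and forcing that value too early is exactly how you end up in an infinite loop.

data Tree = Leaf Int | Node Tree Tree
  deriving Show

-- Replace every leaf with the overall minimum in a single traversal.
-- The minimum m is fed back lazily; making go strict in m would loop.
repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t
    go (Leaf n)   = (n, Leaf m)
    go (Node l r) = let (ml, l') = go l
                        (mr, r') = go r
                    in (min ml mr, Node l' r')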
There is also a recent paper about implementing breadth-first traversals without relying on laziness: https://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/traversals.pdf. Unfortunately, that does not contain any benchmarks.
Maybe the symposium should start catering more to industrial users, now that Haskell itself also seems to be moving more in that direction (e.g. more backwards compatibility). The symposium already allows experience reports and demos.
Sadly, it seems things are not going so well for the Symposium: https://discourse.haskell.org/t/rfc-changes-to-the-haskell-symposium/8359?u=jaror
For more details on DerivingVia, check out the paper: https://ryanglscott.github.io/papers/deriving-via.pdf
Especially Section 4, which lists many use cases, including the superclasses example demonstrated in the video.
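For a quick taste of the mechanism, a toy example of my own (not one from the paper or the video): a newtype borrows the Semigroup and Monoid instances of Sum Int.

{-# LANGUAGE DerivingVia #-}
import Data.Monoid (Sum(..))

-- Score is representationally equal to Sum Int, so its instances can
-- simply be coerced from there.
newtype Score = Score Int
  deriving stock Show
  deriving (Semigroup, Monoid) via Sum Int

-- ghci> Score 1 <> Score 2 <> mempty
-- Score 3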
I think Idris’ bang notation for performing effects in a do-block is pretty; in Haskell it could look like this:
main = do putStrLn ("You said: " ++ !getLine)
Today, you’d have to come up with a new variable name or figure out the right combinator names:
main = do line <- getLine; putStrLn ("You said: " ++ line)
main = putStrLn . ("You said: " ++) =<< getLine
But unfortunately there are more complicated cases:
main = do print (True || !getLine == "foo")
In a strict language with built-in short-circuiting logical operations the getLine would never be performed, but in Haskell || is just a normal function that happens to be lazy in its second argument. The only reasonable way to implement the desugaring seems to be to treat every function as if it were strict and always perform the getLine:
main = do line <- getLine; print (True || line == "foo")
Do you think this is confusing? Or is the bang notation useful enough that you can live with these odd cases? I’m not very happy with this naive desugaring.
My to-watch list:
Stream fusion does work:
data Stream a = forall s. Stream !(s -> Step s a) !s

data Step s a = Yield a !s | Skip !s | Done

data Tup a b = Tup !a !b

cartesianProduct :: Stream a -> Stream b -> Stream (a, b)
cartesianProduct (Stream step1 s01) (Stream step2 s02) = Stream step' s'
  where
    s' = Tup s01 s02
    step' (Tup s1 s2) =
      case step1 s1 of
        Yield x s1' ->
          case step2 s2 of
            Yield y s2' -> Yield (x, y) (Tup s1 s2')
            Skip s2'    -> Skip (Tup s1 s2')
            Done        -> Skip (Tup s1' s02)
        Skip s1' -> Skip (Tup s1' s2)
        Done     -> Done

eft :: Int -> Int -> Stream Int
eft x y = Stream step x
  where
    step s
      | s > y     = Done
      | otherwise = Yield s (s + 1)

fooS :: Stream (Int, Int)
fooS = cartesianProduct (eft 0 10) (eft 0 10)

toList :: Stream a -> [a]
toList (Stream step s0) = go s0
  where
    go !s =
      case step s of
        Yield x s' -> x : go s'
        Skip s'    -> go s'
        Done       -> []

foo :: [(Int,Int)]
foo = toList fooS
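And a tiny driver of my own, just to check that the fused pipeline produces the same results as the plain list version:

main :: IO ()
main = print (take 5 foo)  -- [(0,0),(0,1),(0,2),(0,3),(0,4)]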