So there's a simple-ish Haskell solution for these problems: tails lets you separate out each item in a list from the list of items after it, since matching on x:ys for each tail gives you the item x and the rest ys:

ghci> tails [1,2,3,4]
[[1,2,3,4],[2,3,4],[3,4],[4],[]]
findPair :: [Int] -> Maybe Int
findPair xs = listToMaybe $ do
    x:ys <- tails xs
    y    <- ys
    guard (x + y == 2020)
    pure (x*y)

findTriple :: [Int] -> Maybe Int
findTriple xs = listToMaybe $ do
    x:ys <- tails xs
    y:zs <- tails ys
    z    <- zs
    guard (x + y + z == 2020)
    pure (x*y*z)
But this method is a little bit "extra", since we actually don't need to search all of ys for the proper sum...if we pick x as 500, then we really only need to check if 1520 is a part of ys.

So we really only need to check for set inclusion:
import qualified Data.IntSet as IS
findPair :: Int -> IS.IntSet -> Maybe Int
findPair goal xs = listToMaybe $ do
    x <- IS.toList xs
    let y = goal - x
    guard (y `IS.member` xs)
    pure (x * y)
And our first part will be findPair 2020!

You could even implement findTriple in terms of findPair, using IS.split to partition a set into all items smaller than and larger than a given number. Splitting is a very efficient operation on a binary search tree like IntSet:
findTriple :: Int -> IS.IntSet -> Maybe Int
findTriple goal xs = listToMaybe $ do
    x <- IS.toList xs
    let (_, ys) = IS.split x xs
        goal'   = goal - x
    case findPair goal' ys of
      Nothing -> empty
      Just yz -> pure (x*yz)
But hey...this recursive descent is kind of neat. We could write a general function to find any goal in any number of items!
-- | Given a number n of items and a goal sum and a set of numbers to
-- pick from, finds the n numbers in the set that add to the goal sum.
knapsack
    :: Int              -- ^ number of items n to pick
    -> Int              -- ^ goal sum
    -> IS.IntSet        -- ^ set of options
    -> Maybe [Int]      -- ^ resulting n items that sum to the goal
knapsack 0 _    _  = Nothing
knapsack 1 goal xs
    | goal `IS.member` xs = Just [goal]
    | otherwise           = Nothing
knapsack n goal xs = listToMaybe $ do
    x <- IS.toList xs
    let goal'   = goal - x
        (_, ys) = IS.split x xs
    case knapsack (n - 1) goal' ys of
      Nothing -> empty
      Just rs -> pure (x:rs)
And so we have:
part1 :: [Int] -> Maybe Int
part1 = knapsack 2 2020 . IS.fromList
part2 :: [Int] -> Maybe Int
part2 = knapsack 3 2020 . IS.fromList
And we could go on, and on, and on!
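For a quick sanity check, here's what knapsack gives on the sample numbers from the puzzle prompt:

ghci> knapsack 2 2020 (IS.fromList [1721,979,366,299,675,1456])
Just [299,1721]
ghci> knapsack 3 2020 (IS.fromList [1721,979,366,299,675,1456])
Just [366,675,979]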
Definitely very unnecessary, but it does shave my time on Part 2 down from around 2ms to around 20μs :)
>> Day 01a
benchmarking...
time 5.564 μs (5.347 μs .. 5.859 μs)
0.987 R² (0.979 R² .. 1.000 R²)
mean 5.499 μs (5.390 μs .. 5.783 μs)
std dev 546.8 ns (238.7 ns .. 928.6 ns)
variance introduced by outliers: 87% (severely inflated)
* parsing and formatting times excluded
>> Day 01b
benchmarking...
time 51.91 μs (51.03 μs .. 53.43 μs)
0.988 R² (0.978 R² .. 0.995 R²)
mean 58.57 μs (56.07 μs .. 61.01 μs)
std dev 9.320 μs (8.111 μs .. 10.06 μs)
variance introduced by outliers: 93% (severely inflated)
* parsing and formatting times excluded
Day 2, not too bad for Haskell either :)
There is some fun in parsing here:
data Policy = P
    { pIx1  :: Int
    , pIx2  :: Int
    , pChar :: Char
    , pPass :: String
    }

parsePolicy :: String -> Maybe Policy
parsePolicy str = do
    [ixes,c:_,pwd] <- pure $ words str
    [ix1,ix2]      <- pure $ splitOn "-" ixes
    P <$> readMaybe ix1
      <*> readMaybe ix2
      <*> pure c
      <*> pure pwd
I used one of my more regular do-block tricks: if you pattern match in a Maybe do-block, then failed pattern matches will turn the whole thing into a Nothing. So if either of those list literal pattern matches fails, the whole block returns Nothing.
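To illustrate the trick with a contrived snippet: the same pattern match either succeeds or fails depending on the input, and a failed match becomes Nothing instead of a runtime error.

ghci> do { [a,b] <- Just (words "3-4 a:"); pure (a,b) }
Just ("3-4","a:")
ghci> do { [a,b] <- Just (words "3-4"); pure (a,b) }
Nothing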
In any case, we just need to write a function to check if a given policy is valid under either criterion:
countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p

validate1 :: Policy -> Bool
validate1 P{..} = n >= pIx1 && n <= pIx2
  where
    n = countTrue (== pChar) pPass

validate2 :: Policy -> Bool
validate2 P{..} = n == 1
  where
    n = countTrue (== pChar) [pPass !! (pIx1 - 1), pPass !! (pIx2 - 1)]
And so parts 1 and 2 are just a count of how many policies are true :)
part1 :: [Policy] -> Int
part1 = countTrue validate1
part2 :: [Policy] -> Int
part2 = countTrue validate2
>> Day 02a
benchmarking...
time 55.69 μs (55.61 μs .. 55.78 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 55.89 μs (55.82 μs .. 56.03 μs)
std dev 323.1 ns (232.5 ns .. 422.3 ns)
* parsing and formatting times excluded
>> Day 02b
benchmarking...
time 42.96 μs (42.88 μs .. 43.06 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 43.11 μs (43.07 μs .. 43.19 μs)
std dev 196.8 ns (94.94 ns .. 332.4 ns)
* parsing and formatting times excluded
Here I'm going to list two methods --- one that involves pre-building a set to check if a tree is at a given point, and the other involves just a single direct traversal checking all valid points for trees!
First of all, I'm going to reveal one of my favorite secrets for parsing 2D ASCII maps!
asciiGrid :: IndexedFold (Int, Int) String Char
asciiGrid = reindexed swap (lined <.> folded)
This gives you an indexed fold (from the lens package) iterating over each character in a string, indexed by (x,y)!

This lets us parse today's ASCII forest pretty easily into a Set (Int, Int):
parseForest :: String -> Set (Int, Int)
parseForest = ifoldMapOf asciiGrid $ \xy c -> case c of
    '#' -> S.singleton xy
    _   -> S.empty
This folds over the input string, giving us the (x,y) index and the character at that index. We accumulate with a monoid, so we can use a Set (Int, Int) to collect the coordinates where the character is '#' and ignore all other coordinates.

Admittedly, Set (Int, Int) is sliiiightly overkill, since you could probably use Vector (Vector Bool) or something with V.fromList . map (V.fromList . map (== '#')) . lines, and check for membership with double-indexing. But I was bracing for something a little more demanding, like having to iterate over all the trees or something. Still, sparse grids are usually my go-to data structure for Advent of Code ASCII maps.
Anyway, now we need to be able to traverse the ray. We can write a function to check all points in our line, given the slope (delta x and delta y):
countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p

countLine :: Int -> Int -> Set (Int, Int) -> Int
countLine dx dy pts = countTrue valid [0..322]
  where
    valid i = (x, y) `S.member` pts
      where
        x = (i * dx) `mod` 31
        y = i * dy
And there we go :)
part1 :: Set (Int, Int) -> Int
part1 = countLine 1 3

part2 :: Set (Int, Int) -> Int
part2 pts = product $
    [ countLine 1 1
    , countLine 3 1
    , countLine 5 1
    , countLine 7 1
    , countLine 1 2
    ] <*> [pts]
Note that this checks a lot of points we wouldn't normally need to check: any y points out of range (322) for dy > 1. We could add a minor optimization to only check for membership if y is in range, but because our check is a set lookup, it isn't too inefficient and it always returns False anyway. So it's a small price to pay for slightly cleaner code :)
So this was the solution I used to submit my original answers, but I started thinking about possible optimizations. I realized that we could actually do the whole thing in a single traversal...since we could associate each of the points with coordinates as we go along, and reject any coordinates that would not be on the line!
We can write a function to check if a coordinate is on a line:
validCoord
    :: Int    -- ^ dx
    -> Int    -- ^ dy
    -> (Int, Int)
    -> Bool
validCoord dx dy = \(x,y) ->
    let (i,r) = y `divMod` dy
    in  r == 0 && (dx * i) `mod` 31 == x
And now we can use lengthOf with the coordinate fold up there, which counts how many traversed items match our fold:

countLineDirect :: Int -> Int -> String -> Int
countLineDirect dx dy = lengthOf (asciiGrid . ifiltered tree)
  where
    checkCoord = validCoord dx dy
    tree pt c = c == '#' && checkCoord pt
And this gives the same answer, with the same interface!
part1 :: String -> Int
part1 = countLineDirect 1 3

part2 :: String -> Int
part2 pts = product $
    [ countLineDirect 1 1
    , countLineDirect 3 1
    , countLineDirect 5 1
    , countLineDirect 7 1
    , countLineDirect 1 2
    ] <*> [pts]
Is the direct single-traversal method any faster?
Well, it's complicated, slightly. There's a clear benefit in the pre-built set method for part 2, since we essentially build up an efficient structure (the Set) that we re-use for all five lines. We get the most benefit if we build the set once and re-use it many times, since we only have to do the actual coordinate folding once.
So, directly comparing the two methods, we see the single-traversal as faster for part 1 and slower for part 2.
However, we can do a little better for the single-traversal method. As it turns out, the lens indexed fold is kind of slow. I was able to write the single-traversal one a much faster way by directly using zip [0..], without losing too much readability. And with this direct single traversal and computing the indices manually, we get a much faster time for part 1 (about ten times faster!) and a slightly faster time for part 2 (about 5 times faster). The benchmarks for this optimized version are what is presented below.
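For reference, here's a minimal sketch of that zip-based traversal (the actual optimized version in my repo may differ a bit):

countLineDirect' :: Int -> Int -> String -> Int
countLineDirect' dx dy str = countTrue isTree
    [ ((x, y), c)
    | (y, row) <- zip [0..] (lines str)
    , (x, c  ) <- zip [0..] row
    ]
  where
    isTree ((x, y), c) = c == '#' && validCoord dx dy (x, y)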
>> Day 03a
benchmarking...
time 241.3 μs (239.5 μs .. 244.2 μs)
0.998 R² (0.996 R² .. 1.000 R²)
mean 241.8 μs (239.8 μs .. 245.7 μs)
std dev 8.800 μs (3.364 μs .. 14.91 μs)
variance introduced by outliers: 33% (moderately inflated)
* parsing and formatting times excluded
>> Day 03b
benchmarking...
time 1.155 ms (1.124 ms .. 1.197 ms)
0.986 R² (0.967 R² .. 0.997 R²)
mean 1.235 ms (1.156 ms .. 1.496 ms)
std dev 434.4 μs (61.26 μs .. 910.6 μs)
variance introduced by outliers: 98% (severely inflated)
* parsing and formatting times excluded
I almost hit the leaderboard today, but hit the 1 minute timeout because I didn't read carefully enough to treat cid as optional ;_;
Ah well, that's life!
Anyway, there are a lot of great Haskell solutions out there involving parser combinators and validation of different fields, stuff like that. My original solution parsed a map of fields to values, and then validated those values according to their keys.
But taking a step back from it all, I thought it would be a nice opportunity to try out the principle of Parse, Don't Validate and see if I can take it to its extremes! Implementing this in a nice way led me to refinement types with the refined library, and also to the higher-kinded data pattern, supported by the barbies library.
So, what is "Parse, Don't Validate"? It means: instead of parsing your data into some structure and then checking if the structure is valid (like my original parse-a-map-then-check-it), try instead to represent your data in a structure where it is impossible to represent or create an invalid instance in the first place. And so what was originally "validation" is now simply parsing your data into that correct-by-construction structure.
This seemed like a good candidate for the refined library, which gives us data types that are "literally" impossible to construct unless they are in the right shape.
-- | (a <-> b) will represent the type of an integer between a and b
type a <-> b = Refined (FromTo a b) Int

-- | (n ** a) will represent the type of a list of a's with exactly n elements
type n ** a = Refined (SizeEqualTo n) [a]

-- | These come included in the library (here specialized to Maybe)
refineThrow :: Int -> Maybe (a <-> b)
refineThrow :: [a] -> Maybe (n ** a)
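For instance, trying those out in ghci (the FromTo bounds are inclusive):

ghci> refineThrow 1995 :: Maybe (1920 <-> 2002)
Just (refined 1995)
ghci> refineThrow 1919 :: Maybe (1920 <-> 2002)
Nothing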
Which gives us a good picture for the type of our "correct-by-construction" passport:
data Height =
      HCm (150 <-> 193)
    | HIn ( 59 <->  76)

data Eye = AMB | BLU | BRN | GRY | GRN | HZL | OTH

data Passport = Passport
    { pByr :: 1920 <-> 2002
    , pIyr :: 2010 <-> 2020
    , pEyr :: 2020 <-> 2030
    , pHgt :: Height
    , pHcl :: 6 ** (0 <-> 15)
    , pEcl :: Eye
    , pPid :: 9 ** (0 <-> 9)
    }
Et voila! We now have a passport type that is impossible to construct unless you have all the correct components!
That's great and all, but...how do we actually parse our data type into this?
One way that could work is to parse each key-value pair into a Passport with all fields blank except for the field corresponding to that key-value pair, and then combine those optional-field passports into a "certain" passport. So we can imagine:
data PassportMaybe = PassportMaybe
    { pByrMaybe :: Maybe (1920 <-> 2002)
    , pIyrMaybe :: Maybe (2010 <-> 2020)
    , pEyrMaybe :: Maybe (2020 <-> 2030)
    , pHgtMaybe :: Maybe Height
    , pHclMaybe :: Maybe (6 ** (0 <-> 15))
    , pEclMaybe :: Maybe Eye
    , pPidMaybe :: Maybe (9 ** (0 <-> 9))
    }

with an appropriate Monoid instance that merges known fields together, and a function like

fromPassportMaybe :: PassportMaybe -> Maybe Passport

that will only work if all the fields are Just.
And hey, we would also maybe like to keep a collection of all the parsers so we can dispatch them whenever we want...
data PassportParser = PassportParser
    { pByrParser :: String -> Maybe (1920 <-> 2002)
    , pIyrParser :: String -> Maybe (2010 <-> 2020)
    , pEyrParser :: String -> Maybe (2020 <-> 2030)
    , pHgtParser :: String -> Maybe Height
    , pHclParser :: String -> Maybe (6 ** (0 <-> 15))
    , pEclParser :: String -> Maybe Eye
    , pPidParser :: String -> Maybe (9 ** (0 <-> 9))
    }
And wait a minute ... doesn't part 1 require us to create a passport without validating the strings? So we also need to create
data PassportRaw = PassportRaw
    { pByrRaw :: String
    , pIyrRaw :: String
    , pEyrRaw :: String
    , pHgtRaw :: String
    , pHclRaw :: String
    , pEclRaw :: String
    , pPidRaw :: String
    }
And also
data PassportRawMaybe = PassportRawMaybe
    { pByrRaw :: Maybe String
    , pIyrRaw :: Maybe String
    , pEyrRaw :: Maybe String
    , pHgtRaw :: Maybe String
    , pHclRaw :: Maybe String
    , pEclRaw :: Maybe String
    , pPidRaw :: Maybe String
    }
as well, for the accumulation part? Wow, this sounds like a horrible idea!
Or...does it? What if we try the old higher-kinded data trick?
data Passport f = Passport
    { pByr :: f (1920 <-> 2002)
    , pIyr :: f (2010 <-> 2020)
    , pEyr :: f (2020 <-> 2030)
    , pHgt :: f Height
    , pHcl :: f (6 ** (0 <-> 15))
    , pEcl :: f Eye
    , pPid :: f (9 ** (0 <-> 9))
    }
  deriving (Generic)
Neat, huh? We now have a flexible data type that can account for all usage patterns! For example:
-- | the original
type FullPassport = Passport Identity

-- | the optional-field
type PassportMaybe = Passport Maybe

-- | the parser collection
newtype Parser a = Parser { runParser :: String -> Maybe a }
type PassportParser = Passport Parser

-- | the raw strings
newtype Const w a = Const { getConst :: w }
type PassportRaw = Passport (Const String)

-- | the optional raw strings
type PassportRawMaybe = Passport (Const (Maybe String))
We get all of our original desired types, all from a single type definition, by swapping out the functor f we use! And then we can just use the barbies library to convert between the different formats. Neat!
Well, what are we waiting for?
First, let's derive all of the instances necessary for our parsing to work, given by the barbies and one-liner-instances packages.
instance FunctorB Passport
instance ApplicativeB Passport
instance TraversableB Passport
instance ConstraintsB Passport
deriving via GMonoid (Passport f) instance AllBF Semigroup f Passport => Semigroup (Passport f)
deriving via GMonoid (Passport f) instance AllBF Monoid f Passport => Monoid (Passport f)
deriving instance AllBF Show f Passport => Show (Passport f)
Now we can write our parsers:
newtype Parser a = Parser { runParser :: String -> Maybe a }

passportParser :: Passport Parser
passportParser = Passport
    { pByr = Parser $ refineThrow <=< readMaybe
    , pIyr = Parser $ refineThrow <=< readMaybe
    , pEyr = Parser $ refineThrow <=< readMaybe
    , pHgt = Parser $ \str ->
        let (x, u) = span isDigit str
        in  case u of
              "cm" -> fmap HCm . refineThrow =<< readMaybe x
              "in" -> fmap HIn . refineThrow =<< readMaybe x
              _    -> Nothing
    , pHcl = Parser $ \case
        '#':n -> refineThrow =<< traverse readHex n
        _     -> Nothing
    , pEcl = Parser $ readMaybe . map toUpper
    , pPid = Parser $ refineThrow <=< traverse (refineThrow <=< readMaybe . (:[]))
    }
  where
    readHex c
      | isHexDigit c = refineThrow (digitToInt c)
      | otherwise    = Nothing
The usage of refineThrow means that we use the machinery already defined in the refined library to automatically check that our data is within the given ranges...no need for manual range checking!

Now we can load a single key:val token into a passport that is empty (all fields are Const Nothing) except for the value at the seen key:
-- | Load a single "key:val" token into a passport
loadPassportField :: String -> Passport (Const (Maybe String))
loadPassportField str = case splitOn ":" str of
    [k,v] -> case k of
      "byr" -> mempty { pByr = Const (Just v) }
      "iyr" -> mempty { pIyr = Const (Just v) }
      "eyr" -> mempty { pEyr = Const (Just v) }
      "hgt" -> mempty { pHgt = Const (Just v) }
      "hcl" -> mempty { pHcl = Const (Just v) }
      "ecl" -> mempty { pEcl = Const (Just v) }
      "pid" -> mempty { pPid = Const (Just v) }
      _     -> mempty
    _ -> mempty
ghci> loadPassportField "eyr:1234"
Passport
{ pByr = Const Nothing
, pIyr = Const Nothing
, pEyr = Const (Just "1234")
, pHgt = Const Nothing
, pHcl = Const Nothing
, pEcl = Const Nothing
, pPid = Const Nothing
}
Now we can parse a field in its entirety by using bzipWith (from barbies) to "zip together" a Passport Parser and a Passport (Const (Maybe String)), with a given function that tells how to merge the values in any two matching fields.
parsePassportField :: String -> Passport Maybe
parsePassportField = bzipWith go passportParser . loadPassportField
  where
    go p (Const x) = runParser p =<< x
In the above, go is run between each matching field in the Passport Parser and the Passport (Const (Maybe String)), and the overall effect is that each string is run with the appropriate parser for its field.
ghci> parsePassportField "eyr:2025"
Passport
{ pByr = Nothing
, pIyr = Nothing
, pEyr = Just (refined 2025)
, pHgt = Nothing
, pHcl = Nothing
, pEcl = Nothing
, pPid = Nothing
}
ghci> parsePassportField "eyr:2050"
Passport
{ pByr = Nothing
, pIyr = Nothing
, pEyr = Nothing
, pHgt = Nothing
, pHcl = Nothing
, pEcl = Nothing
, pPid = Nothing
}
And the way the Monoid instance works, we can just combine two Passport Maybes with <>:
ghci> parsePassportField "eyr:2025" <> parsePassportField "ecl:brn"
Passport
{ pByr = Nothing
, pIyr = Nothing
, pEyr = Just (refined 2025)
, pHgt = Nothing
, pHcl = Nothing
, pEcl = Just BRN
, pPid = Nothing
}
Which gives us a nice function to parse a whole passport, with the help of btraverse to flip a Passport Maybe into a Maybe (Passport Identity):

parsePassport :: String -> Maybe (Passport Identity)
parsePassport = btraverse (fmap Identity)
              . foldMap parsePassportField
              . words
The result of foldMap parsePassportField . words is a Passport Maybe, and btraverse "pulls out" all of the Just fields and returns a Passport Identity if all of the fields are Just, failing with Nothing if any of the fields are Nothing.
And...that's it for part 2!
-- | Get a list of all valid passports.
part2 :: String -> [Passport Identity]
part2 = mapMaybe parsePassport . splitOn "\n\n"
This works because we know that if we have a Passport Identity, it has to be a valid passport. It's physically impossible to create one that isn't valid!
All hail "Parse, Don't Validate"!
And part 1 is a fun diversion: instead of a Passport Identity, we want to parse into a Passport (Const String). The mechanics are pretty much the same:
loadPassport :: String -> Maybe (Passport (Const String))
loadPassport = btraverse (\(Const x) -> Const <$> x)
             . foldMap loadPassportField
             . words
The result of foldMap loadPassportField is a Passport (Const (Maybe String)), and so btraverse will pull out all the Justs again, returning a Passport (Const String) and failing if any of those values were Nothings.
Note the sliiight abuse of the Monoid instance for Maybe, which combines strings by concatenation. But we're more concerned about whether or not a field is present than the actual contents of its string.
Anyway, here's wonderwall.
-- | Get a list of all complete passports field string values.
part1 :: String -> [Passport (Const String)]
part1 = mapMaybe loadPassport . splitOn "\n\n"
>> Day 04a
benchmarking...
time 1.424 ms (1.381 ms .. 1.491 ms)
0.987 R² (0.972 R² .. 0.999 R²)
mean 1.437 ms (1.410 ms .. 1.496 ms)
std dev 141.4 μs (52.48 μs .. 241.8 μs)
variance introduced by outliers: 71% (severely inflated)
* parsing and formatting times excluded
>> Day 04b
benchmarking...
time 4.212 ms (4.036 ms .. 4.512 ms)
0.985 R² (0.974 R² .. 1.000 R²)
mean 4.097 ms (4.039 ms .. 4.222 ms)
std dev 253.3 μs (50.40 μs .. 438.6 μs)
variance introduced by outliers: 39% (moderately inflated)
* parsing and formatting times excluded
So, compared to yesterday's, this was decently chill :)
The main insight here probably is that the puzzle is just describing that the seat ID's are straight up binary notation for numerals, with F/L representing what is traditionally 0, and B/R representing what is traditionally 1. So we can use any of our binary parsers from the standard libraries, or we can just directly pull it into binary.
seatId :: String -> Int
seatId = foldl' iGuessWe'reDoingThis 0
  where
    iGuessWe'reDoingThis n = \case
      'B' -> 2*n+1
      'R' -> 2*n+1
      _   -> 2*n
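Quick check against the sample boarding pass from the puzzle:

ghci> seatId "FBFBBFFRLR"
357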
A nice one-pass way to find the missing seat ID is to realize that if we sum all the numbers from min to max, and sum all of our list's seat IDs, then the difference is the missing number. Luckily there's a nice closed-form solution for the sum of all numbers in a given range (the sum of numbers from a to b is b*(b+1)`div`2 - a*(a-1)`div`2), so we can do all of this in a single pass using the foldl library:
{-# LANGUAGE ApplicativeDo #-}

import qualified Control.Foldl as F

findHole :: F.Fold Int (Maybe Int)
findHole = do
    mn <- F.minimum
    mx <- F.maximum
    sm <- F.sum
    pure $ missingItem <$> mn <*> mx <*> pure sm
  where
    missingItem mn mx sm = totalSum - sm
      where
        totalSum = mx*(mx+1)`div`2 - mn*(mn-1)`div`2
An F.Fold Int (Maybe Int) folds a list of Ints into a Maybe Int. You can run it with F.fold :: F.Fold a b -> [a] -> b.
I really like the foldl library because it lets you build a complex single-pass fold by combining multiple simple single-pass folds (like F.minimum, F.maximum, F.sum) using an Applicative interface. We need to do a bit of wrangling with the Maybes because F.minimum and F.maximum each return a Maybe Int.
And that's more or less it! We can actually represent the entire thing as a fold if we use F.premap, to pre-map a fold...

F.premap :: (c -> a) -> F.Fold a b -> F.Fold c b

-- "pre-apply" seatId so we fold over strings instead
F.premap seatId findHole :: F.Fold String (Maybe Int)
And...that's enough to do it all in a single pass!
part1 :: [String] -> Maybe Int
part1 = F.fold $ F.premap seatId F.maximum
part2 :: [String] -> Maybe Int
part2 = F.fold $ F.premap seatId findHole
Bonus: I was tipped off that the 3rd-from-last bit of F and L is 1, while the same bit of B and R is 0:
ghci> (.&. 1) . (`shiftR` 2) . ord <$> "FLBR"
[1,1,0,0]
So we can actually use this for seatId to get a slight speed boost and help out the branch predictor, maybe:
import Data.Bits

seatId :: String -> Int
seatId = foldl' iGuessWe'reDoingThis 0
  where
    iGuessWe'reDoingThis n c =
      2 * n + (complement (ord c) `shiftR` 2) .&. 1
>> Day 05a
benchmarking...
time 17.30 μs (17.28 μs .. 17.35 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 17.32 μs (17.30 μs .. 17.37 μs)
std dev 89.27 ns (48.81 ns .. 150.8 ns)
* parsing and formatting times excluded
>> Day 05b
benchmarking...
time 18.84 μs (18.82 μs .. 18.85 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 18.84 μs (18.83 μs .. 18.86 μs)
std dev 56.33 ns (44.68 ns .. 77.97 ns)
* parsing and formatting times excluded
Another day that is fairly straightforward in Haskell, I feel! But in other languages that support functional approaches, it should be straightforward as well.
The answer involves lists of groups of responses:
import           Data.List.NonEmpty (NonEmpty)
import           Data.Set           (Set)
import qualified Data.List.NonEmpty as NE
import qualified Data.Set           as S

type Response = Set Char
type Group    = NonEmpty Response

parseAnswers :: String -> [Group]
parseAnswers = mapMaybe ((fmap . fmap) S.fromList . NE.nonEmpty . lines)
             . splitOn "\n\n"
And now we just need to decide how to aggregate each group. For part 1, this requires a set union between every Response in a Group:

part1 :: [Group] -> Int
part1 = sum . map (S.size . foldr1 S.union)

(foldr1 here is safe because we have a non-empty container)
And for part 2, this requires a set intersection between every Response in a Group:

part2 :: [Group] -> Int
part2 = sum . map (S.size . foldr1 S.intersection)
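Trying both against the sample input from the puzzle:

ghci> let sample = "abc\n\na\nb\nc\n\nab\nac\n\na\na\na\na\n\nb"
ghci> part1 (parseAnswers sample)
11
ghci> part2 (parseAnswers sample)
6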
That's it!
>> Day 06a
benchmarking...
time 124.2 μs (122.7 μs .. 127.3 μs)
0.990 R² (0.970 R² .. 1.000 R²)
mean 125.7 μs (123.1 μs .. 130.8 μs)
std dev 13.18 μs (4.807 μs .. 23.01 μs)
variance introduced by outliers: 82% (severely inflated)
* parsing and formatting times excluded
>> Day 06b
benchmarking...
time 124.8 μs (123.9 μs .. 126.4 μs)
0.997 R² (0.991 R² .. 1.000 R²)
mean 125.4 μs (124.1 μs .. 127.8 μs)
std dev 6.333 μs (790.0 ns .. 11.65 μs)
variance introduced by outliers: 51% (severely inflated)
* parsing and formatting times excluded
Another AoC staple, a graph search that can be solved with recursive knot tying! The last one I remember off the top of my head was 2019 Day 6.
Here we can represent a graph as a map of vertices to other vertices, with an edge value:

type Graph v e = Map v (Map v e)

Exercise is left to the reader to parse our dataset into a Graph String Int, a graph of bags to bags with Int edges.
Because our map has no cycles, we can take advantage of recursive knot tying to "fold up" all children and sub-children.
For example, part 1 can be written as:
allDescendants :: Ord v => Graph v e -> Map v (Set v)
allDescendants gr = descendantMap
  where
    descendantMap = gr <&>
      M.foldMapWithKey (\v _ -> S.insert v (M.findWithDefault S.empty v descendantMap))

-- note: (<&>) is flip fmap
Here we "assume" we already have a fully-featured Map v (Set v)
map of
vertices to all their descendants, and then build descendantMap
in terms of
it. For every vertex v
in the Map v e
directly underneath a given vertex,
v
is a descendant, and also all of v
's descendants (which we find by
looking things up in descendantMap
, the map of all descendants).
Oh, um...oops, this found all the descendants, but we want all of the ancestors. So we have to flip the graph if we want to use this.
flipGraph :: Ord v => Graph v e -> Graph v e
flipGraph mp = M.fromListWith M.union
    [ (m, M.singleton n e)
    | (n, ms) <- M.toList mp
    , (m, e ) <- M.toList ms
    ]

allAncestors :: Ord v => Graph v e -> Map v (Set v)
allAncestors = allDescendants . flipGraph
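For example, on a tiny made-up graph (bag names hypothetical, just for illustration):

ghci> let g = M.fromList [("a", M.fromList [("b",1)]), ("b", M.fromList [("c",2)]), ("c", M.empty)] :: Graph String Int
ghci> allDescendants g
fromList [("a",fromList ["b","c"]),("b",fromList ["c"]),("c",fromList [])]
ghci> allAncestors g
fromList [("b",fromList ["a"]),("c",fromList ["a","b"])]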
And so that leaves Part 1 as:
part1 :: Graph String Int -> Maybe (Set String)
part1 = M.lookup "shiny gold" . allAncestors
Part 2 we can do a similar way, by "assuming" we have a map of all vertices to their "usage count", and looking things up to build it:
usageCounts :: Ord v => Graph v Int -> Map v Int
usageCounts gr = usageMap
  where
    usageMap = gr <&> \neighbors -> sum
        [ n * (M.findWithDefault 0 v usageMap + 1)
        | (v, n) <- M.toList neighbors
        ]
So to find the total usage of each bag, we look at each (v, n) pair in the Map v Int underneath a given vertex, look up the usage of that v (by looking it up in usageMap), add 1 (because the bag itself is used), and multiply by n, the number of times the full contents of the bag is used.
And so Part 2 is:
part2 :: Graph String Int -> Maybe Int
part2 = M.lookup "shiny gold" . usageCounts
If we stare at the two implementations, we note that both have pretty much the same overall structure: we are accumulating some sort of fold over all descendants of a given node. If we "outsource" this accumulation as a monoidal one (for part 1, it's Set union, and for part 2, it's Sum Int addition), we can needlessly hyper-generalize this to fold over any Monoid instance.
-- | Recursively fold up a monoid value for each vertex and all of its
-- children's monoid values. You can transform the value in-transit before it
-- is accumulated if you want.
foldMapGraph
    :: (Ord v, Monoid m)
    => (v -> m)       -- ^ embed the vertex
    -> (e -> m -> m)  -- ^ transform with edge before it is accumulated
    -> Graph v e
    -> Map v m
foldMapGraph f g gr = res
  where
    res = gr <&>
      M.foldMapWithKey (\s v -> f s <> foldMap (g v) (M.lookup s res))

allDescendants :: Ord v => Graph v e -> Map v (Set v)
allDescendants = foldMapGraph
    S.singleton     -- the node is embedded as itself
    (\_ -> id)      -- ignore the edge

usageCounts :: Ord v => Graph v Int -> Map v (Sum Int)
usageCounts = foldMapGraph
    (const 0)                   -- ignore the nodes
    (\n x -> Sum n * (x + 1))   -- the edge multiplies the accumulated value plus one
That's the curse of Haskell, I guess? If you write these things you can't help but notice the common patterns, and you somehow wind up trying to figure out the higher-order function that can abstract over them, even though you know you don't need to :)
>> Day 07a
benchmarking...
time 2.423 ms (2.265 ms .. 2.631 ms)
0.980 R² (0.967 R² .. 1.000 R²)
mean 2.271 ms (2.245 ms .. 2.334 ms)
std dev 136.8 μs (48.17 μs .. 231.7 μs)
variance introduced by outliers: 42% (moderately inflated)
* parsing and formatting times excluded
>> Day 07b
benchmarking...
time 12.11 μs (11.77 μs .. 12.51 μs)
0.991 R² (0.987 R² .. 0.995 R²)
mean 12.23 μs (11.88 μs .. 12.69 μs)
std dev 1.266 μs (913.5 ns .. 1.695 μs)
variance introduced by outliers: 87% (severely inflated)
* parsing and formatting times excluded
Nothing tooooo complicated about today's, I feel: it is another staple of AoC --- simulating a virtual machine! :) Only this time our program is separate from our memory, so we don't have any actual self-modifying code. However, my guard is up: this might turn into one of those soon in another day.
At least, there are some interesting things we can do to prepare for a potential switch to different requirements in a later day (with the Ixed typeclass), and also a nice way to handle the perturbations in Part 2 using holesOf and lens traversal composition.

My main program was a sequence of Commands:

data Instr = NOP | ACC | JMP
type Command = (Instr, Int)
But, what container should we use for these?

- [Command]: Nope, bad, literally no reason to ever use this except for O(1) push and pop. The main operation here is indexing, and it's O(i) on the index.
- Vector Command: Very fast indexing (O(1) on the index), but very bad for any sort of addition of new instructions in-flight if that comes up in the future. But good enough for now.
- Seq Command: Efficient indexing (O(log i) on the index), and very good for adding new instructions to either end (or even in the middle) in-flight if it comes to that.
- IntMap Command: Efficient indexing (effectively constant on Int keys), very good for adding new instructions to either end, and also good for a sparse program bank if it ever comes to that.
Luckily, we can get a common interface for all four of these options by using the Ixed typeclass from the lens library, which abstracts over different "indexable" things. You'd get a safe index with xs ^? ix i. So whenever possible, I've written all my code to work generally over all four of these in case I have to swap quickly in the future.
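For example, the same ix works uniformly across the candidates (a quick illustration):

ghci> [10,20,30] ^? ix 1
Just 20
ghci> V.fromList [10,20,30] ^? ix 5
Nothing
ghci> IM.fromList [(0,10),(5,50)] ^? ix 5
Just 50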
One theoretically nice container would actually be the PointedList data type (one implementation is in the pointedlist library). This is because all of our addressing is relative, so instead of storing a "current index", we could just always point towards the focus of the tape, and shift the tape left or right for JMP.
However, this is kind of difficult to adapt to work in a uniform interface to the other four types...so, goodbye theoretical nicety, sacrificed in the name of adaptivity :'(
So for my solution I used Vector, which has just the API necessary, without the extra flexibility that Seq and IntMap offer, since we don't need it! But, just know that things could be swapped at any time, thanks to the magic (or horror, depending on your point of view) of typeclasses.
On the other hand, if we separate out the index from a fixed container, it does make the state a lot simpler. It means that our state is really only the current pointer and the accumulator:
data CState = CS { csPtr :: !Int, csAcc :: !Int }
initialCS :: CState
initialCS = CS 0 0
runCommand :: Vector Command -> CState -> Maybe CState
So our actual program becomes a very tight CState -> Maybe CState loop -- very efficient because the state is only a tiny strict record! That means that we can simply chain things using iterateMaybe to get a list of all successive states:
-- | A handy utility function I keep around
iterateMaybe :: (a -> Maybe a) -> a -> [a]
iterateMaybe f = go
  where
    go x = x : case f x of
      Nothing -> []
      Just y  -> go y

allStates :: Vector Command -> [CState]
allStates cmd = iterateMaybe (runCommand cmd) initialCS
So now we have a generator of all the states a given program bank will ever output. For part 1, we just need to find a loop. Luckily I have another handy utility function that scans a list and reports the first time a projection function's result is repeated
-- | Lazily find the first repeated projection.
firstRepeatedBy :: Ord a => (b -> a) -> [b] -> Maybe b
firstRepeatedBy f = go S.empty
  where
    go seen (x:xs)
      | f x `S.member` seen = Just x
      | otherwise           = go (f x `S.insert` seen) xs
    go _ [] = Nothing

part1 :: Vector Command -> Maybe CState
part1 cmd = firstRepeatedBy csPtr states
  where
    states = iterateMaybe (runCommand cmd) initialCS
Now all that's left is to actually implement runCommand!
runCommand
    :: Vector Command
    -> CState
    -> Maybe CState
runCommand cmds cs = (cmds ^? ix (csPtr cs)) <&> \case
    (NOP, _) -> cs { csPtr = csPtr cs + 1 }
    (ACC, i) -> cs { csPtr = csPtr cs + 1, csAcc = csAcc cs + i }
    (JMP, i) -> cs { csPtr = csPtr cs + i }

-- note: <&> is flip fmap
And the nice thing about it is that if we leave off the type annotation of runCommand, we actually get a really nice polymorphic type if we ask GHC what it expects:
runCommand
    :: (Ixed t, Index t ~ Int, IxValue t ~ (Instr, Int))
    => t
    -> CState
    -> Maybe CState
This is the fully polymorphic signature that you get just from using cmds ^? ix (csPtr cs). It says that you can use this on any program bank t that's an instance of Ixed, as long as its index type is Int and the value at that index is an (Instr, Int). Nothing about the typeclasses here is inherently lensy; it's just a typeclass (like any other) to abstract over common interfaces that many types might have. With this fully polymorphic signature, we can use this on Vector Command, [Command], Seq Command, and IntMap Command, as we wish to in the future if the need comes up.
For part 2 we can take advantage of some actual lens/optics magic, by using holesOf:

holesOf
    :: Traversal' s a
    -> s
    -> [Pretext (->) a a s]
The type is definitely scary, but holesOf is saying:

- Give me a specification of which holes you want to poke (Traversal' s a, a value s with holes a)
- ... and an item you want to poke the holes in (s)
- ... and I'll return to you a list of continuations (Pretext (->) a a s), each one allowing you to edit a different hole in s.
Pretext is a bit of a complicated type, but the main interface you would use it with is:

peeks :: (a -> a) -> Pretext (->) a a s -> s

peeks asks for a function you would want to run on a hole (the a -> a) and the continuation you got from holesOf, and then returns the "modified" s, modified according to that transformation you ran on that hole.
(thanks to mniip on freenode IRC for pointing out how these two work together to me!)
Every item in the list returned by holesOf corresponds to a different hole, so for example:
ghci> map (peeks negate) (holesOf traverse [1,2,3])
[ [-1, 2, 3]
, [ 1,-2, 3]
, [ 1, 2,-3]
]
The traverse :: Traversal' [a] a is a Traversal that specifies the "holes" of a list [a] to be each item a in that list. And so holesOf traverse [1,2,3] will return three Pretexts: one corresponding to modifying each item in the list individually.
peeks negate on each of the three items returned by holesOf traverse [1,2,3] will return the modified list, each with a single hole edited by negate.
In our case, instead of negate, we can use a flipInstr that flips NOP to JMP and JMP to NOP:
flipInstr :: Instr -> Instr
flipInstr = \case
    NOP -> JMP
    ACC -> ACC
    JMP -> NOP
And now peeks flipInstr will do the right thing:
ghci> map (peeks flipInstr) (holesOf traverse [NOP,ACC,JMP,JMP])
[ [JMP,ACC,JMP,JMP]
, [NOP,ACC,JMP,JMP]
, [NOP,ACC,NOP,JMP]
, [NOP,ACC,JMP,NOP]
]
An extra coolio thing is that traversals compose with ., so we can actually use a traversal _1 (here, Traversal' (a,b) a, which says the single "hole" in an (a,b) is the first item in the tuple) to be more nuanced with our hole selection:
ghci> map (peeks flipInstr)
(holesOf (traverse . _1) [(NOP,1),(ACC,2),(JMP,3),(JMP,4)])
[ [(JMP,1),(ACC,2),(JMP,3),(JMP,4)]
, [(NOP,1),(ACC,2),(JMP,3),(JMP,4)]
, [(NOP,1),(ACC,2),(NOP,3),(JMP,4)]
, [(NOP,1),(ACC,2),(JMP,3),(NOP,4)]
]
Neat!
With that we can fully write part2: for each perturbation, check if there is a loop. If there is a loop, this ain't it. If there isn't a loop, then we hit the jackpot: return the last item in our list of seen states, as that's the last state before termination.
part2 :: Vector Command -> Maybe CState
part2 cmds0 = listToMaybe
    [ res
    | cmds <- peeks flipInstr <$> holesOf (traverse . _1) cmds0
    , let states = iterateMaybe (runCommand cmds) initialCS
    , res <- case firstRepeatedBy csPtr states of
        Nothing -> [last states]    -- no loop: program terminated
        Just _  -> []               -- loop found: skip
    ]
In my actual code, I actually use the experiment function instead of peeks -- it's like a "peeksM", in a way:

peeks      ::              (a ->   a) -> Pretext (->) a a s ->   s
experiment :: Functor f => (a -> f a) -> Pretext (->) a a s -> f s

So instead of giving it an Instr -> Instr, you could give it an Instr -> Maybe Instr, and "cancel out" any branches that don't need to be addressed:

experiment :: (a -> Maybe a) -> Pretext (->) a a s -> Maybe s    -- in our case
flipInstrs :: Instr -> Maybe Instr
flipInstrs = \case
    NOP -> Just JMP
    ACC -> Nothing    -- for ACC indices, don't do anything
    JMP -> Just NOP
ghci> map (experiment flipInstrs)
(holesOf (traverse . _1) [(NOP,1),(ACC,2),(JMP,3),(JMP,4)])
[ Just [(JMP,1),(ACC,2),(JMP,3),(JMP,4)]
, Nothing
, Just [(NOP,1),(ACC,2),(NOP,3),(JMP,4)]
, Just [(NOP,1),(ACC,2),(JMP,3),(NOP,4)]
]
part2 :: Vector Command -> Maybe CState
part2 cmds0 = listToMaybe
    [ res
    | Just cmds <- experiment flipInstrs <$> holesOf (traverse . _1) cmds0
    , let states = iterateMaybe (runCommand cmds) initialCS
    , res <- case firstRepeatedBy csPtr states of
        Nothing -> [last states]    -- no loop: program terminated
        Just _  -> []               -- loop found: skip
    ]
Not a super huge improvement, but maybe more theoretically nice because we can skip over the possible trials where we are permuting an ACC. By my reckoning, 52% of my input file instructions were ACC instructions, so this small thing actually shaves off a decent amount of time.
>> Day 08a
benchmarking...
time 6.243 μs (6.182 μs .. 6.346 μs)
0.998 R² (0.996 R² .. 1.000 R²)
mean 6.210 μs (6.180 μs .. 6.325 μs)
std dev 192.8 ns (60.82 ns .. 390.0 ns)
variance introduced by outliers: 38% (moderately inflated)
* parsing and formatting times excluded
>> Day 08b
benchmarking...
time 2.473 ms (2.298 ms .. 2.654 ms)
0.967 R² (0.953 R² .. 0.984 R²)
mean 2.485 ms (2.401 ms .. 2.589 ms)
std dev 298.9 μs (248.6 μs .. 339.1 μs)
variance introduced by outliers: 74% (severely inflated)
* parsing and formatting times excluded
Let's tackle day 9!
A good way to check if a sequence of 25 numbers can add to the 26th number is to just iterate over everything, like we might have done in day 1:
-- | check if, for ([x,y,z] ++ [a]), no pair in xyz can add to 'a'. If it's
-- bad, it returns 'Just a'.
isBad :: [Int] -> Maybe Int
isBad xs0 = do
    x : xs <- Just $ reverse xs0
    let badCheck = null $ do
          y:ys <- tails xs
          z    <- ys
          guard $ (y + z) == x
    x <$ guard badCheck
I use my favorite Maybe do-notation trick of pattern matching within the block to take advantage of do-block short circuiting for Maybe with its MonadFail instance. If you reverse xs0 then you can get the last item as the head, and the rest as the tail :)
In badCheck we do a list-monad powered search (see my Day 1 Reflections for more details on how it works). badCheck will return True if the search is empty (with null). guard badCheck will be Nothing if badCheck fails (and our list is good) and Just x if badCheck succeeds (and our list is bad).
Part 1 is then just finding the first bad sequence:
part1 :: [Int] -> Maybe Int
part1 xs = listToMaybe
    [ y
    | ys     <- tails xs
    , Just y <- [isBad (take 26 ys)]
    ]
For part 2, there's a nice-ish way to do it in linear time. First, we can generate a cumulative sum cumSum for the entire list. Then we know that sumFrom(i,j) in our original list is just cumSum(j) - cumSum(i). This is similar to how definite integrals work, or also how you can find the area under a probability density function by subtracting two points from its cumulative distribution function.
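A quick illustration of the idea with a made-up list:

ghci> let cumSum = scanl' (+) 0 [2,5,1,4]
ghci> cumSum
[0,2,7,8,12]
ghci> cumSum !! 3 - cumSum !! 1    -- sum of the slice [5,1]
6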
So now the problem just becomes finding i,j where cumSum(j) - cumSum(i) == goal. There's a clean imperative-ish way to do this that involves just "sliding" your window i,j up from 0,1. If cumSum(j) - cumSum(i) is too small, increase j by 1 to open the window up a bit. If it's too big, increase i by 1 to close the window up a bit.
findBounds :: V.Vector Int -> Int -> Maybe (Int, Int)
findBounds ns goal = go 0 1
  where
    go !i !j = do
      x <- ns V.!? i
      y <- ns V.!? j
      case compare (y - x) goal of
        LT -> go i (j + 1)
        EQ -> pure (i, j)
        GT -> go (i + 1) j
And there you go!
part2 :: [Int] -> Maybe Int
part2 xs = do
    goal <- part1 xs
    let cumSum = V.fromList (scanl' (+) 0 xs)   -- cumulative sum
    (i, j) <- findBounds cumSum goal
    let ys = take (j - i) . drop i $ xs
    pure $ minimum ys + maximum ys
If anything, maybe the implementation of findBounds shows how one might directly translate a tight mutable loop in an imperative language into a tail-recursive function in Haskell!
We do often like to avoid explicitly writing recursive functions when we can, but in this case I'm not sure if there's a way to get around it other than switching to a fully mutable solution, or doing something very complex and extremely specific to the situation. If you think of one, let me know! :D
>> Day 09a
benchmarking...
time 153.6 μs (148.1 μs .. 162.6 μs)
0.988 R² (0.980 R² .. 1.000 R²)
mean 151.5 μs (149.0 μs .. 156.8 μs)
std dev 12.69 μs (5.899 μs .. 21.86 μs)
variance introduced by outliers: 74% (severely inflated)
* parsing and formatting times excluded
>> Day 09b
benchmarking...
time 172.0 μs (169.9 μs .. 175.0 μs)
0.998 R² (0.994 R² .. 1.000 R²)
mean 170.4 μs (169.5 μs .. 174.1 μs)
std dev 5.863 μs (3.130 μs .. 10.90 μs)
variance introduced by outliers: 32% (moderately inflated)
* parsing and formatting times excluded
Today is another day where the "automatically build a memoized recursive map" in Haskell really shines :) It's essentially the same problem as Day 7.
For the first part, once you sort the list, you can compute the differences and then build a frequency map
-- | Build a frequency map
freqs :: Ord a => [a] -> Map a Int
freqs = M.fromListWith (+) . map (,1) . toList
diffs :: [Int] -> [Int]
diffs xs@(_:ys) = zipWith (-) ys xs
ghci> diffs [1,3,4,7]
[2,1,3]
And so part 1 can be done with:
part1 :: [Int] -> Int
part1 xs = (stepFreqs M.! 1) * (stepFreqs M.! 3)
  where
    xs' = 0 : xs ++ [maximum xs + 3]
    stepFreqs = freqs (diffs (sort xs'))
For part 2, if we get an IntSet of all of our numbers (adding in the zero, and the goal, the maximum plus 3), then we can use it to build our IntMap of the number of paths from each given number.
import Data.IntMap (IntMap)
import Data.IntSet (IntSet)
import qualified Data.IntMap as IM
import qualified Data.IntSet as IS
-- | A map of numbers to the count of how many paths from that number to
-- the goal
pathsToGoal :: IntSet -> IntMap Int
pathsToGoal xs = res
  where
    res = flip IM.fromSet xs $ \i ->
      if i == goal
        then 1
        else sum [ IM.findWithDefault 0 (i + j) res
                 | j <- [1,2,3]
                 ]
    goal = IS.findMax xs
Our answer is res, the map of numbers to the count of how many paths exist from that number to the goal. To generate the count for a given number i, we add the number of paths from i+1, i+2, and i+3. We get that count by looking it up in res!
part2 :: [Int] -> Int
part2 xs = pathsToGoal xs' IM.! 0
  where
    xs' = IS.fromList (0 : xs ++ [maximum xs + 3])
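Checking against the small sample from the puzzle:

ghci> part1 [16,10,15,5,1,11,7,19,6,12,4]
35
ghci> part2 [16,10,15,5,1,11,7,19,6,12,4]
8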
A quick note --- after some discussion on the irc, we did find a closed-form solution...I might be editing this to implement it in Haskell eventually :)
>> Day 10a
benchmarking...
time 6.240 μs (6.090 μs .. 6.639 μs)
0.985 R² (0.964 R² .. 0.999 R²)
mean 6.843 μs (6.274 μs .. 7.805 μs)
std dev 2.589 μs (1.164 μs .. 3.977 μs)
variance introduced by outliers: 99% (severely inflated)
* parsing and formatting times excluded
>> Day 10b
benchmarking...
time 9.300 μs (8.801 μs .. 10.10 μs)
0.979 R² (0.961 R² .. 1.000 R²)
mean 9.003 μs (8.778 μs .. 9.453 μs)
std dev 1.001 μs (176.6 ns .. 1.635 μs)
variance introduced by outliers: 89% (severely inflated)
* parsing and formatting times excluded
My first day on the leaderboard! :D 21 / 352. Had a big dip on my second part because I had some silly typos that were difficult to catch in the moment D:
After refactoring things, I realized that part 1 and part 2 are really the same, with only two differences:

- Each point has a different neighborhood set (in part 1, it's the immediate neighbors; in part 2, it's all of the line-of-sights in each direction).
- The threshold for a seat emptying out is 4 for part 1 and 5 for part 2.
So let's write our function parameterized on those two. We'll be storing our world as a Map Point Bool, where False represents an empty seat and True represents a full one. Floor points are not included in the map.
-- | A 2-vector type from the linear library, with a very convenient Num
-- instance.
data V2 a = V2 a a
type Point = V2 Int

-- | A useful utility function I keep around that counts the number of items in
-- a container matching a predicate
countTrue :: Foldable f => (a -> Bool) -> f a -> Int
countTrue p = length . filter p . toList
seatRule
    :: Int                    -- ^ exit seat threshold
    -> Map Point (Set Point)  -- ^ neighbors for each point
    -> Map Point Bool
    -> Map Point Bool
seatRule thr nmp mp = M.intersectionWith go nmp mp
  where
    go neighbs = \case
      False -> not (any (mp M.!) neighbs)   -- empty: fill only if no neighbor is full
      True  ->
        let onNeighbs = countTrue (mp M.!) neighbs
        in  not (onNeighbs >= thr)          -- full: empty out at the threshold
Now we just need to create our neighborhood maps.
-- | The eight immediate neighbors around 0,0
immediateNeighbs :: [Point]
immediateNeighbs =
    [ V2 dx dy
    | dx <- [-1 .. 1]
    , dy <- if dx == 0 then [-1,1] else [-1..1]
    ]
-- | From a set of seat locations, get a map of points to all of those points'
-- neighbors where there is a seat. Should only need to be computed once.
lineOfSights1
    :: Set Point
    -> Map Point (Set Point)
lineOfSights1 pts = M.fromSet go pts
  where
    go p = S.fromList
         . filter (`S.member` pts)
         . map (+ p)
         $ immediateNeighbs
-- | From a set of seat locations, get a map of points to all of those points'
-- visible neighbors. Should only need to be computed once.
lineOfSights2
    :: Set Point
    -> Map Point (Set Point)
lineOfSights2 pts = M.fromSet go pts
  where
    go p = S.fromList
         . mapMaybe (los p)
         $ immediateNeighbs
    los p d = find (`S.member` pts)
            . takeWhile inBoundingBox
            . tail
            $ iterate (+ d) p
    inBoundingBox = all (inRange (0, 99))
    -- inRange from Data.Ix
    -- all from Data.Foldable and V2's Foldable instance
(I hard-coded the bounds here, but in my actual solution I inferred it from the input.)
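For reference, here's a minimal sketch of how one might infer that bounding box from the seat set (not necessarily how my actual solution does it):

inBoundingBoxOf :: Set Point -> Point -> Bool
inBoundingBoxOf pts = \(V2 x y) ->
    x >= xMin && x <= xMax && y >= yMin && y <= yMax
  where
    xMin = minimum [ x | V2 x _ <- S.toList pts ]
    xMax = maximum [ x | V2 x _ <- S.toList pts ]
    yMin = minimum [ y | V2 _ y <- S.toList pts ]
    yMax = maximum [ y | V2 _ y <- S.toList pts ]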
Now to solve!
-- | Handy utility function I have; repeat a function until you get the same
-- result twice.
fixedPoint :: Eq a => (a -> a) -> a -> a
fixedPoint f = go
  where
    go !x
      | x == y    = x
      | otherwise = go y
      where
        y = f x

solveWith
    :: Int                    -- ^ exit seat threshold
    -> Map Point (Set Point)  -- ^ neighbors for each point
    -> Map Point Bool         -- ^ initial state
    -> Int                    -- ^ equilibrium size
solveWith thr neighbs = countTrue id . fixedPoint (seatRule thr neighbs)
part1
    :: Map Point Bool
    -> Int
part1 mp = solveWith 4 los mp
  where
    los = lineOfSights1 (M.keysSet mp)

part2
    :: Map Point Bool
    -> Int
part2 mp = solveWith 5 los mp
  where
    los = lineOfSights2 (M.keysSet mp)
>> Day 11a
benchmarking...
time 133.7 ms (125.9 ms .. 142.4 ms)
0.994 R² (0.982 R² .. 0.999 R²)
mean 133.6 ms (128.6 ms .. 138.2 ms)
std dev 7.158 ms (4.642 ms .. 10.49 ms)
variance introduced by outliers: 11% (moderately inflated)
* parsing and formatting times excluded
>> Day 11b
benchmarking...
time 128.9 ms (115.0 ms .. 142.0 ms)
0.985 R² (0.962 R² .. 0.998 R²)
mean 129.8 ms (125.0 ms .. 137.1 ms)
std dev 9.339 ms (5.576 ms .. 12.80 ms)
variance introduced by outliers: 23% (moderately inflated)
* parsing and formatting times excluded
Hello! Today's puzzle for me ended up a neat exercise in fitting together simple parts into something fun.
To preface this, I do usually represent all my coordinates using V2 Int from the linear library, which supports addition and scaling:
data V2 a = V2 !a !a
type Point = V2 Int
-- | You can add points using the Num instance
(+) :: Point -> Point -> Point
-- | You can do scaling
(*^) :: Int -> Point -> Point
And I have a utility type that represents a compass direction:
data Dir = North | East | South | West
  deriving Show

dirPoint :: Dir -> Point
dirPoint = \case
    North -> V2   0   1
    East  -> V2   1   0
    South -> V2   0 (-1)
    West  -> V2 (-1)  0

rotPoint :: Num a => Dir -> V2 a -> V2 a
rotPoint = \case
    North -> id
    East  -> \(V2 x y) -> V2   y  (-x)
    West  -> \(V2 x y) -> V2 (-y)   x
    South -> negate
And I do like to define a Group interface for my Dir type, just for fun.
-- | If you consider a Dir as a turn, then `mulDir a b` is like turning a, then
-- turning b.
mulDir :: Dir -> Dir -> Dir
mulDir North = id
mulDir East  = \case North -> East
                     East  -> South
                     South -> West
                     West  -> North
mulDir South = \case North -> South
                     East  -> West
                     South -> North
                     West  -> East
mulDir West  = \case North -> West
                     East  -> North
                     South -> East
                     West  -> South

-- | '<>' is 'mulDir'.
instance Semigroup Dir where
    (<>) = mulDir

-- | If you consider Dir as a turn, then turning by North is the same as not
-- turning at all.
instance Monoid Dir where
    mempty = North

-- | Reverse a turn. Not needed for this puzzle, but still useful in general.
instance Group Dir where
    invert = \case North -> South
                   East  -> West
                   South -> North
                   West  -> East
I did not write any of this for the puzzle --- this is just a nice way I like to think about directions and points in my head :)
One major advantage of defining a Semigroup instance for Dir is that you can take advantage of the pow function from Data.Group:

pow :: Group m => m -> Int -> m

which is like stimes, but supporting negative numbers. pow x 3 is x <> x <> x, and pow x (-3) is invert x <> invert x <> invert x, or invert (x <> x <> x) (same thing, 'cause Group theory). We don't actually need the support for negative numbers in this puzzle, so we could just use stimes, but it's nice that we can just use pow and not think about our input range. And, though it doesn't matter for this challenge, it also uses repeated squaring so it can do these operations in log-n time (pow x 1000000000 only takes 30 operations), which is pretty neat for a lot of different applications (like in my writeup for 2019 Day 22).
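For example, with the Dir group:

ghci> pow East 2
South
ghci> pow East (-1)
West
ghci> pow East 1000000001
East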
Anyway, I think that's enough preamble...now let's use it! :D Each instruction seems to be one of three forms: "go forward", "turn", or "move an absolute vector". So I represented these three as a data type, parameterized by the amount to go forward, the direction to turn, and the vector to move by, respectively.

And each first character gives us a different way to process the Int argument, so I stored those interpretations in a Map. Then we can parse each line by just using readMaybe :: Read a => String -> Maybe a on a pattern match.
data Instr = Forward Int
           | Turn Dir
           | Move Point
  deriving Show

-- | A map of a Char to the way to interpret the Int argument
mkInstr :: Map Char (Int -> Instr)
mkInstr = M.fromList
    [ ('F', Forward)
    , ('L', Turn . pow West . (`div` 90))
    , ('R', Turn . pow East . (`div` 90))
    , ('N', Move . (*^ dirPoint North))
    , ('S', Move . (*^ dirPoint South))
    , ('E', Move . (*^ dirPoint East ))
    , ('W', Move . (*^ dirPoint West ))
    ]

parseInstr :: String -> Maybe Instr
parseInstr []    = Nothing
parseInstr (c:n) = M.lookup c mkInstr <*> readMaybe n
ghci> parseInstr "F30"
Forward 30
ghci> parseInstr "L270"
Turn East
ghci> parseInstr "N15"
Move (V2 0 15)
And now part 1 and part 2 are basically just different ways of folding through a list of instructions:

toInstrs :: String -> Maybe [Instr]
toInstrs = traverse parseInstr . lines
-- | Use (ship heading, position) as the state
part1 :: [Instr] -> (Dir, Point)
part1 = foldl' go (East, V2 0 0)
  where
    go :: (Dir, Point) -> Instr -> (Dir, Point)
    go (!dir, !p) = \case
      Forward n -> (dir     , p + n *^ dirPoint dir)
      Turn d    -> (dir <> d, p                    )
      Move r    -> (dir     , p + r                )

-- | Use (ship position, waypoint vector from ship) as the state
part2 :: [Instr] -> (Point, Point)
part2 = foldl' go (V2 0 0, V2 10 1)
  where
    go :: (Point, Point) -> Instr -> (Point, Point)
    go (!shp, !wp) = \case
      Forward n -> (shp + n *^ wp, wp            )
      Turn d    -> (shp          , rotPoint d wp )
      Move r    -> (shp          , wp + r        )
And that's it! For part1, we want the Manhattan distance of the ship's final position (the second item in the tuple), and for part2, we want the Manhattan distance of the ship's final position (the first item in the tuple).

mannDist :: Point -> Int
mannDist (V2 x y) = abs x + abs y
>> Day 12a
benchmarking...
time 3.218 μs (3.088 μs .. 3.351 μs)
0.985 R² (0.970 R² .. 0.992 R²)
mean 2.910 μs (2.819 μs .. 3.080 μs)
std dev 371.3 ns (282.6 ns .. 558.0 ns)
variance introduced by outliers: 92% (severely inflated)
* parsing and formatting times excluded
>> Day 12b
benchmarking...
time 7.870 μs (7.667 μs .. 8.341 μs)
0.984 R² (0.964 R² .. 1.000 R²)
mean 8.189 μs (7.772 μs .. 9.812 μs)
std dev 2.715 μs (100.2 ns .. 5.679 μs)
variance introduced by outliers: 99% (severely inflated)
* parsing and formatting times excluded
Aw man, I feel like I would have leaderboarded today had I not been busy :'( These type of number theory problems are the ones I usually do well on.
Oh well! Silly internet points, right?
For part 1, you just need to minimize a function on each bus ID:
part1 :: Int -> [Int] -> (Int, Int)
part1 t0 xs = minimumBy (comparing snd)
    [ (x, waitTime)
    | x <- xs
    , let waitTime = x - (t0 `mod` x)
    ]
Part 2 is where things get interesting! Let's try to think of things inductively: start with small lists, and see how we would "add one more".
Let's say we had (offset, id) pairs (0,7) and (1,13), like in the example. This means that we want to find times where t `mod` 7 == 0 and (t + 1) `mod` 13 == 0.
We can sort of do a manual search by hand to get 14 as our lowest candidate. But also, note that 14 + (7*13)n for any integer n would preserve the offset property: 14, 14 + 91, 14 + 182, etc. So the family of all "valid" numbers is 14 + (7*13)n.
Next, what if we wanted to find the situation for pairs (0,7), (1,13), and (4,15)? Well, we already know that any solution that includes (0,7) and (1,13) will be of the form 14 + (7*13)n. So now we just need to find the first one of those that also matches (4,15):
-- 'until' repeatedly applies a function until it finds a value that matches a
-- predicate
ghci> until (\t -> (t + 4) `mod` 15 == 0) (+ (7*13)) 14
1106
Ah hah, good ol' 1106. Well, 1106 isn't the only number that works. We can see that 1106 + (7*13*15)n for any integer n would also work, since it preserves that mod property.
And so, we can repeat this process over and over again for each new number we see.
- Keep track of the current "lowest match" (14) and the current "search step" (7*13).
- When you see a new number, search that family until you find a new lowest match that includes the new number.
- Use that as the next lowest match, and multiply in the new number to get the new search step.
- Rinse and repeat.
Overall, this works pretty well as a foldl, where we keep this (lowest match, search step) pair as an accumulator, and update it as we see each new value in our list.
part2 :: [(Int, Int)] -> Int
part2 = fst . foldl' go (0, 1)
where
go (!base, !step) (offset, i) = (base', step * i)
where
base' = until (\n -> (n + offset) `mod` i == 0)
(+ step)
base
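And we can check this against the full sample schedule from the puzzle (7,13,x,x,59,31,19), which should give the puzzle's stated answer:
ghci> part2 [(0,7),(1,13),(4,59),(6,31),(7,19)]
1068781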
>> Day 13a
benchmarking...
time 189.4 ns (184.7 ns .. 198.3 ns)
0.992 R² (0.985 R² .. 1.000 R²)
mean 189.8 ns (186.2 ns .. 199.6 ns)
std dev 19.00 ns (7.817 ns .. 34.74 ns)
variance introduced by outliers: 90% (severely inflated)
* parsing and formatting times excluded
>> Day 13b
benchmarking...
time 3.868 μs (3.865 μs .. 3.872 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 3.868 μs (3.865 μs .. 3.876 μs)
std dev 14.47 ns (9.762 ns .. 24.35 ns)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
I guess today is a "here's the algorithm, now implement it" puzzle, to contrast/take a break from yesterday's "here's the goal, figure out the algorithm" :)
First, let's start with an intermediate data type representing the actions possible on each line:
data Instr =
Mask [Maybe Bool]
| Write Int Int
The mask will be a list of Maybe Bool
, where X
is Nothing
, 0
is Just False
, and 1
is Just True
. However, it's important to reverse the string
when parsing it from the input, because we want index 0
to correspond to bit
0
, index 1
to correspond to bit 1
, etc., to make our lives easier.
That's because we can implement the application of a mask (for part 1) using
ifoldl'
,
a version of foldl'
that gives you an item's index as you are folding it:
import Data.Bits (clearBit, setBit)
import Control.Lens.Indexed (ifoldl')
applyMask1 :: Int -> [Maybe Bool] -> Int
applyMask1 = ifoldl' $ \i x -> \case
Nothing -> x
Just False -> clearBit x i
Just True -> setBit x i
If the bit list contains a Nothing
in a given index, leave the item
unchanged. If it contains a Just False
, clear that index's bit (set it to
zero). If it contains a Just True
, set that index's bit (set it to one).
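As a quick check, the sample mask XXXXXXXXXXXXXXXXXXXXXXXXXXXXX1XXXX0X from the puzzle (in our reversed-list encoding: index 1 is Just False, index 6 is Just True, everything else Nothing) applied to 11 should give 73:
ghci> let msk = [Nothing, Just False] ++ replicate 4 Nothing ++ [Just True] ++ replicate 29 Nothing
ghci> applyMask1 11 msk
73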
And that leaves part 1 as a foldl through all the instructions, keeping the current map and mask as state:
import Data.IntMap (IntMap)
import qualified Data.IntMap as IM
part1 :: [Instr] -> (IntMap Int, [Maybe Bool])
part1 = foldl' go (IM.empty, [])
where
go :: (IntMap Int, [Maybe Bool]) -> Instr -> (IntMap Int, [Maybe Bool])
go (!mp, !msk) = \case
Mask msk' -> (mp, msk')
Write addr n ->
let mp' = IM.insert addr (applyMask1 n msk) mp
in (mp', msk)
Part 2's mask application is interesting, because it lives in
"non-determinism". Basically, each mask bit application could potentially
yield multiple possibilities. We have to accumulate every nested possibility.
This feature is given to us by list's Monad
instance, so we can swap
ifoldl'
for
ifoldlM
:
ifoldl' :: (Int -> b -> a -> b) -> b -> [a] -> b
ifoldlM :: Monad m => (Int -> b -> a -> m b) -> b -> [a] -> m b
For ifoldlM
, each result lives in monad m
, so the semantics of "proceeding
along the fold" are deferred to the Monad
instance for m
. If m
is
Maybe
, it means that you only proceed if you get a Just
, or else
short-circuit with Nothing
. If m
is IO
, it means that proceeding
involves chaining the IO action's execution and binding the result to give it
to the function's next iteration. If m
is []
(list), it means that
subsequent chaining will run the function on every possibility returned by
the function's previous call, accumulating every possible way of choosing every
possible choice. (I talked about this in more depth in one of my first ever
Haskell blog
posts).
import Control.Lens.Indexed (ifoldlM)
applyMask2 :: Int -> [Maybe Bool] -> [Int]
applyMask2 = ifoldlM $ \i x -> \case
Nothing -> [clearBit x i, setBit x i]
Just False -> [x]
Just True -> [setBit x i]
For these, we return a list of every possible change from a given mask bit.
For the Nothing
"floating" case, there are two possibilities; for the other
two, there is only one. We trust list's Monad
instance to properly thread
over all possible results into a list of all possible changes that that Int
could have been subjected to.
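Using the part 2 sample mask 000000000000000000000000000000X1001X on address 42 (reversed-list encoding again), we should get the four decoded addresses from the puzzle:
ghci> import Data.List (sort)
ghci> let msk = [Nothing, Just True, Just False, Just False, Just True, Nothing] ++ replicate 30 (Just False)
ghci> sort (applyMask2 42 msk)
[26,27,58,59]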
And so, part 2 looks a lot like part 1!
part2 :: [Instr] -> (IntMap Int, [Maybe Bool])
part2 = foldl' go (IM.empty, [])
where
go :: (IntMap Int, [Maybe Bool]) -> Instr -> (IntMap Int, [Maybe Bool])
go (!mp, !msk) = \case
Mask msk' -> (mp, msk')
Write addr n ->
let newMp = IM.fromList ((,n) <$> applyMask2 addr msk)
in (newMp <> mp, msk)
(<>)
here is a left-biased merger, so it merges in all of the newly seen
indices into the existing ones.
>> Day 14a
benchmarking...
time 158.7 μs (158.0 μs .. 159.4 μs)
0.999 R² (0.997 R² .. 1.000 R²)
mean 157.9 μs (157.6 μs .. 158.6 μs)
std dev 1.293 μs (845.8 ns .. 2.372 μs)
* parsing and formatting times excluded
>> Day 14b
benchmarking...
time 25.76 ms (24.66 ms .. 27.04 ms)
0.990 R² (0.979 R² .. 0.998 R²)
mean 25.49 ms (25.02 ms .. 26.27 ms)
std dev 1.358 ms (982.2 μs .. 1.914 ms)
variance introduced by outliers: 20% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
So it is yet another "here's the algorithm, now implement it" day! Only the challenge this time is...you should probably implement it to be really fast!
I don't think there is too much wiggle room in how to implement things here;
my original solution basically kept an IntMap
to the last seen time of any
value, and just repeatedly looked things up and modified the (current time,
last said) tuple.
My original solution took around 70 seconds to run, and that was what I used to submit things originally. But let's see if we can get it down to something a little less...perceptible :) This reflection can be a deep dive into writing tight, performant Haskell.
The data type we'll be using is an unboxed mutable
array.
There's a trick we can use: because we have a map from integers to values, we
can just use the integer keys as indices into an array. This is usually a bad
idea but for the fact that the keys we'll be using are bounded within a
decently small range (we won't ever say a number that is greater than 30
million), so we can definitely keep a 30 million-item array in memory
without any major problems. We'll also store our last-said times as Int32
to
be a little bit more efficient, since we're trying to eke out every last bit of
perf.
So overall we still keep some state: the current time and the last said item.
Since those are just integers, we can keep them as pure state using
StateT
running over ST s
(the mutable state monad, where our mutable
vectors will live).
import Control.Monad.ST
import Control.Monad.State
import GHC.Int (Int32)
import qualified Data.Vector.Unboxed.Mutable as MV
data LoopState = LS
{ lsLastSaid :: !Int
, lsCurrTime :: !Int32
}
sayNext
    :: MV.MVector s Int32          -- ^ the mutable vector of last-seen times
    -> StateT LoopState (ST s) ()  -- ^ an 'ST s' action with some pure 'LoopState' state
sayNext v = do
    LS x i <- get            -- get the current pure state
    lst <- MV.read v x       -- our last said is x, so look up the last time we saw it
    MV.write v x i           -- update the last-time-seen
    let j | lst == 0  = 0        -- we haven't seen it
          | otherwise = i - lst  -- we have seen it
    put (LS (fromIntegral j) (i + 1))   -- update last seen and current time
{-# INLINE sayNext #-}
We will want to INLINE this so that it gets inlined directly into our main loop code.
Oh, let's also write a function to initialize our sequence with starting inputs:
saySomething
    :: MV.MVector s Int32          -- ^ the mutable vector of last-seen times
    -> Int                         -- ^ a number to "say"
    -> StateT LoopState (ST s) ()  -- ^ an 'ST s' action with some pure 'LoopState' state
saySomething v y = do
    LS x i <- get
    MV.unsafeWrite v x i     -- write the last seen number with the right time
    put (LS y (i + 1))       -- queue up the number to say next
{-# INLINE saySomething #-}
And now we're good to go to put it all together! We can use whileM_
from
Control.Monad.Loops
to emulate a while loop, running until lsCurrTime reaches the maximum value.
-- | Returns 'True' until we need to stop
stopCond :: Monad m => Int32 -> StateT LoopState m Bool
stopCond n = gets $ \(LS _ i) -> i < n
{-# INLINE stopCond #-}
-- gets f = f <$> get, it maps a function on top of a get
looper :: Int -> [Int] -> Int
looper n xs = runST $ flip evalStateT (LS 0 0) $ do
    v <- MV.replicate n 0     -- initialize our vector with zeros
    traverse_ (saySomething v) xs
    whileM_ (stopCond (fromIntegral n)) (sayNext v)
    gets lsLastSaid
On my machine (with some minor optimizations, like using
unsafeRead
/unsafeWrite
), this runs in 230ms for part 2...a much more
reasonable improvement over my original 70 seconds! :)
part1 :: [Int] -> Int
part1 = looper 2020
part2 :: [Int] -> Int
part2 = looper 30000000
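Running it on the sample starting numbers from the puzzle should give the puzzle's stated answers:
ghci> part1 [0,3,6]
436
ghci> part2 [0,3,6]
175594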
>> Day 15a
benchmarking...
time 2.523 μs (2.390 μs .. 2.614 μs)
0.986 R² (0.986 R² .. 0.989 R²)
mean 2.320 μs (2.266 μs .. 2.409 μs)
std dev 203.6 ns (138.6 ns .. 256.9 ns)
variance introduced by outliers: 85% (severely inflated)
* parsing and formatting times excluded
>> Day 15b
benchmarking...
time 291.7 ms (281.8 ms .. 304.4 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 302.0 ms (296.4 ms .. 312.7 ms)
std dev 9.904 ms (3.637 ms .. 13.04 ms)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Today was a nice little self-contained constraint satisfaction problem! Well, it didn't have to be (apparently), but it was fun as one :)
First, our data type:
type Passport = [Int]
data Info = Info
{ iFields :: IntervalMap Int (Set Text)
, iYours :: Passport
, iTheirs :: [Passport]
}
Here we're using IntervalMap
from the data-interval package, which
makes it easy to store data at different intervals with easy lookups. For
example, if we have ["class"]
at interval (1,5)
, and we had ["row"]
at
interval (3,7)
, IntervalMap
will merge them together (with <>
, if we
choose) to get ["class"]
at (1,3)
, ["class","row"]
at (3,5)
, and
["row"]
at (5,7)
.
If we have this IntervalMap
, part 1 becomes straightforward enough with the
efficient IM.notMember
:
import qualified Data.IntervalMap.Lazy as IM
part1 :: Info -> Int
part1 info = sum
[ n
| ns <- iTheirs info
, n <- ns
, n `IM.notMember` iFields info
]
So now let's move on to the search for part 2!
Our goal is to get a list [(Int, Set Text)]
of a column number (in the
passport) with the set of all valid field names for that position. And because
we are going to be doing a search, we want this list in order of smallest to
largest valid-name sets.
First, we can replace the Int
s in each passport instead with the set of
fields they are valid for
validate :: IntervalMap Int (Set Text) -> [Int] -> Maybe [Set Text]
validate flds = traverse (`IM.lookup` flds)
validateAll :: IntervalMap Int (Set Text) -> [Passport] -> [[Set Text]]
validateAll flds = mapMaybe (validate flds)
Here (`IM.lookup` flds)
is Int -> Set Text
: it'll look up the Set Text
corresponding to the interval that the Int
falls under in the IntervalMap
.
It'll return Nothing
if any of the Int
s are invalid, and Just
if all
of the Int
s are valid.
Next we want to build our [(Int, Set Text)]
. The Set Text
is a set of what
is valid for that column number, so to get the Set Text
for 0
, for
instance, we need to S.intersection
all of the first Set Text
s in our list;
to get the Set Text
for 1
, we need to S.intersection
all of the second
Set Text
s in our lists, etc. This can be done succinctly with a transpose
(transpose [[1,2,3],[4,5,6]] == [[1,4],[2,5],[3,6]]
). Then we can use
sortOn
to sort by the size of the valids set.
columnSets :: [[Set Text]] -> [(Int, Set Text)]
columnSets = sortOn (S.size . snd)
. zip [0..]
. map (foldl1' S.intersection)
. transpose
Now we're ready for our search! We'll be using StateT
over list, to get a
backtracking search with backtracking state (I described this technique in a
constraint solving blog
post).
Our state will be the Set Text
of all the "committed" fields so far.
search :: [(Int, Set Text)] -> Maybe [(Int, Text)]
search candidates = listToMaybe . flip evalStateT S.empty $ do
for candidates $ \(i, cands) -> do -- for each (Int, Set Text):
soFar <- get -- get the seen candidates
pick <- lift . toList $ cands S.\\ soFar -- pick from the Set Text not including seens
(i, pick) <$ modify (S.insert pick) -- propose this index/pick, inserting into seens
And that should be it for our search! In the end this gets the first [(Int, Text)]
that is valid, matching a column ID to the field at that column. Our
search supports backtracking through the list monad, but it should be noted
that we actually don't end up needing it for the way the puzzle input is
structured. But, because we sort our lists first from smallest to largest
valid-sets, our solution ends up being equivalent to the non-backtracking
method and backtracking is never actually triggered.
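As a tiny demonstration of the backtracking (with OverloadedStrings for the Text literals), a deliberately unsorted candidate list forces the first pick for column 0 to be retracted:
ghci> search [(0, S.fromList ["a","b"]), (1, S.fromList ["a"])]
Just [(0,"b"),(1,"a")]
The "a" branch for column 0 starves column 1, so the list monad falls back to "b".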
And we can wrap it all up:
part2 :: Info -> Int
part2 info = product
    [ iYours info !! i
    | (i, fld) <- res
    , "departure" `isPrefixOf` fld
    ]
  where
    cSets    = columnSets $ validateAll (iFields info) (iTheirs info)
    Just res = search cSets
>> Day 16a
benchmarking...
time 819.9 μs (816.9 μs .. 823.7 μs)
0.999 R² (0.999 R² .. 1.000 R²)
mean 811.0 μs (807.4 μs .. 819.9 μs)
std dev 16.97 μs (9.577 μs .. 31.05 μs)
variance introduced by outliers: 11% (moderately inflated)
* parsing and formatting times excluded
>> Day 16b
benchmarking...
time 3.517 ms (3.485 ms .. 3.580 ms)
0.998 R² (0.994 R² .. 1.000 R²)
mean 3.508 ms (3.493 ms .. 3.554 ms)
std dev 79.20 μs (23.71 μs .. 158.8 μs)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Neat, Game of Life! :D Actually, the 3D/4D twist does make a big impact on
the best method we'd pick: we run into the curse of
dimensionality. It
means that when we get to 3D and 4D, our world will become vanishingly sparse.
In my own input, only about 4% of the 3D space ended up being active, and 2% of
my 4D space ends up being active. This means that holding a dense vector of
all possible active points (which will be (6+8+6)^n
) is up to 98% wasteful.
And because of the way this process works, we have to completely copy our
entire space at every iteration.
In these times, I'm happy that Haskell has a nice immutable sparse
data structure like Set
. Sparse being beneficial in that we can easily look up and process
only the 2% of active squares, and immutable being beneficial in that each step
already requires a full copy in any case, so immutability doesn't give us any
drawback.
First a function to get all neighbors of a point, using the V3
type from the
linear library, which I've used
many times already for its convenient Num
and Applicative
instances:
import Data.Set (Set)
import qualified Data.Set as S
-- from linear
data V3 a = V3 a a a
-- its Applicative instance
pure x = V3 x x x
neighbsSet :: V3 Int -> Set (V3 Int)
neighbsSet p = S.fromList
[ p + d
| d <- sequence (pure [-1,0,1])
, d /= pure 0
]
Just as a reminder, pure [0,1]
for V3 Int
gives us V3 [0,1] [0,1] [0,1]
,
and if we sequence
that we get a cartesian N-product of all combinations [V3 0 0 0, V3 0 0 1, V3 0 1 0, V3 0 1 1, V3 1 0 0, .. etc.]
. We add each of those
to p
, except for the one that is V3 0 0 0
.
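A quick sanity check on the counts (3^3 combinations, minus the center point):
ghci> length (sequence (pure [-1,0,1]) :: [V3 Int])
27
ghci> S.size (neighbsSet (V3 0 0 0))
26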
Now we can write our stepper, which takes a Set (V3 Int)
and returns the next
Set (V3 Int)
after applying the rules. We can do that first by making a Map (V3 Int) Int
, where Int
is the number of neighbors at a given point. This
can be done by "exploding" every V3 Int
in our set to a Map (V3 Int) Int
,
a map of all its neighbors keyed to values 1, and then using M.unionsWith (+)
to union together all of those exploded neighbors, adding any overlapping keys.
import Data.Map (Map)
import qualified Data.Map as M
neighborMap :: Set (V3 Int) -> Map (V3 Int) Int
neighborMap ps = M.unionsWith (+)
[ M.fromSet (const 1) (neighbsSet p)
| p <- S.toList ps
]
Now to implement the rules:
stepper
:: Set (V3 Int)
-> Set (V3 Int)
stepper ps = stayAlive <> comeAlive
where
neighborCounts = neighborMap ps
stayAlive = M.keysSet . M.filter (\n -> n == 2 || n == 3) $
neighborCounts `M.restrictKeys` ps
comeAlive = M.keysSet . M.filter (== 3) $
neighborCounts `M.withoutKeys` ps
stayAlive
is all of the neighborCounts
keys that correspond to already-alive
points (neighborCounts `M.restrictKeys` ps
), but filtered to the counts
that are 2 or 3. comeAlive
is all of the neighborCounts
keys that
correspond to dead points (neighborCounts `M.withoutKeys` ps
), but filtered
to only counts that are exactly 3. And our result is the set union of both of
those.
So our part 1 becomes:
part1 :: Set (V3 Int) -> Int
part1 = S.size . (!! 6) . iterate stepper
And for part 2...notice that all of our code actually never does anything
specific to V3
! In fact, if we leave the type signatures of neighbsSet
and neighborMap
and stepper
off, GHC will actually suggest more general
type signatures for us.
neighbsSet
:: (Applicative f, Num a, Ord (f a), Traversable f)
=> f a -> Set (f a)
neighborMap
:: (Applicative f, Num a, Ord (f a), Traversable f)
=> Set (f a)
-> Map (f a) Int
stepper
:: (Applicative f, Num a, Ord (f a), Traversable f)
=> Set (f a)
-> Set (f a)
Neat! This means that our code already works for any other fixed-sized
Vector
type with a Num
instance. Like, say...V4
, also from linear?
-- also from the Linear library, with all the same instances
data V4 a = V4 a a a a
part1 :: Set (V3 Int) -> Int
part1 = S.size . (!! 6) . iterate stepper
part2 :: Set (V4 Int) -> Int
part2 = S.size . (!! 6) . iterate stepper
And that's it --- code that should work for both parts :)
>> Day 17a
benchmarking...
time 1.346 ms (1.294 ms .. 1.425 ms)
0.983 R² (0.965 R² .. 0.998 R²)
mean 1.344 ms (1.316 ms .. 1.422 ms)
std dev 134.9 μs (64.10 μs .. 243.8 μs)
variance introduced by outliers: 71% (severely inflated)
* parsing and formatting times excluded
>> Day 17b
benchmarking...
time 1.982 ms (1.914 ms .. 2.100 ms)
0.989 R² (0.980 R² .. 0.999 R²)
mean 1.943 ms (1.921 ms .. 1.995 ms)
std dev 122.6 μs (72.82 μs .. 220.0 μs)
variance introduced by outliers: 47% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Let's parse with parser combinators!
The main way I have learned how to deal with these binary-operation parsers is to separate out the stages into a "bottom" level containing only the leaves (here, the int literals) and parentheses, and then build up layers of precedence one-by-one from highest to lowest. For the first part we only have two layers, then, since we only have one level of precedence.
{-# LANGUAGE OverloadedStrings #-}
import qualified Text.Megaparsec as P
import qualified Text.Megaparsec.Char as P
import qualified Text.Megaparsec.Char.Lexer as PP
type Parser = P.Parsec Void String
parseBottom1 :: Parser Int
parseBottom1 = P.choice
[ PP.decimal
, P.between "(" ")" parseTop1 -- use -XOverloadedStrings to get parsers that match strings
]
parseTop1 :: Parser Int
parseTop1 = do
leftOfOp <- parseBottom1 -- parse the left hand side of a possible binary operator
doNext leftOfOp
where
doNext acc = P.choice -- once we parse a left hand side, pick from:
[ do " * " -- either it's a *
rightOfOp <- parseBottom1 -- ... so we parse the right hand side and multiply
doNext (acc * rightOfOp)
, do " + " -- or it's a +
rightOfOp <- parseBottom1 -- ... so we parse the right hand side and add
doNext (acc + rightOfOp)
, pure acc -- otherwise that was it, no operator
]
Remember that leftOfOp
could either come from a leaf literal number or from a
parenthesized equation. In the end, we get an Int
, representing whatever
number was on the left hand side of our operator. Then we move into doNext
,
which continually accumulates new operations after that first leftOfOp
parse.
If we see a *
, we parse the right hand side, fold that into our accumulator
and repeat until we hit a dead end and yield our accumulated value; same for
+
.
So there's this sort of "cycle" that parseTop
defers to parseBottom
for its
underlying things "in between" the operators, but parseBottom
loops back up
to parseTop
to handle what is in the parentheses.
part1 :: String -> Maybe Int
part1 = P.parseMaybe $
    sum <$> (parseTop1 `P.sepBy` P.newline)
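Trying it on the sample expressions from the puzzle:
ghci> P.parseMaybe parseTop1 "1 + 2 * 3 + 4 * 5 + 6"
Just 71
ghci> P.parseMaybe parseTop1 "2 * 3 + (4 * 5)"
Just 26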
The twist for part 2 is that now we have to have another layer of precedence, so we split things out:
parseBottom2 :: Parser Int
parseBottom2 = P.choice
[ PP.decimal
, P.between "(" ")" parseTop2
]
parseMiddle2 :: Parser Int
parseMiddle2 = do
leftOfOp <- parseBottom2
doNext leftOfOp
where
doNext acc = P.choice
[ do " + "
rightOfOp <- parseBottom2
doNext (acc + rightOfOp)
, pure acc
]
parseTop2 :: Parser Int
parseTop2 = do
leftOfOp <- parseMiddle2
doNext leftOfOp
where
doNext acc = P.choice
[ do " * "
rightOfOp <- parseMiddle2
doNext (acc * rightOfOp)
, pure acc
]
So the parser dependency again is kind of interesting: parseTop2
is built up
of chained parseMiddle2
s, which is built up of chained parseBottom2
, which
could loop back up with parseTop2
if it detects parentheses.
part2 :: String -> Maybe Int
part2 = P.parseMaybe $
sum <$> (parseTop2 `P.sepBy` P.newline)
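And the same samples with the flipped precedence give the puzzle's part 2 values:
ghci> P.parseMaybe parseTop2 "1 + 2 * 3 + 4 * 5 + 6"
Just 231
ghci> P.parseMaybe parseTop2 "2 * 3 + (4 * 5)"
Just 46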
Note that this chaining and looping behavior can be abstracted out --- that's essentially what I wrote in my cleaned up solution. But also the Control.Monad.Combinators.Expr module also abstracts over this pattern, letting you specify the "layers" you want, and it'll generate the right parser for you with the correct weaving of dependencies like I described here. But still, I think it's fun to see how these things end up looking like under the hood :)
>> Day 18a
benchmarking...
time 2.824 ms (2.691 ms .. 3.014 ms)
0.975 R² (0.952 R² .. 0.998 R²)
mean 2.748 ms (2.703 ms .. 2.844 ms)
std dev 208.7 μs (100.8 μs .. 383.4 μs)
variance introduced by outliers: 53% (severely inflated)
* parsing and formatting times excluded
>> Day 18b
benchmarking...
time 2.270 ms (2.143 ms .. 2.447 ms)
0.974 R² (0.958 R² .. 0.996 R²)
mean 2.231 ms (2.180 ms .. 2.378 ms)
std dev 236.7 μs (129.2 μs .. 403.0 μs)
variance introduced by outliers: 70% (severely inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
I had originally solved this puzzle using recursive knot tying and a funky custom Monad --- the writeup for that is available online here. But after some thought and reflection, I saw that things might be a little cleaner as a hylomorphism from recursion-schemes, so I did a rewrite based on it! It also ended up being about 25% faster to run, which was a nice bonus. Note that I do have a blog post on hylomorphisms and recursion schemes (https://blog.jle.im/entry/tries-with-recursion-schemes.html), if you'd like to investigate more about the topic :)
The central type ("base functor") is Rule
:
data Rule a = Simple Char
| Compound [[a]]
deriving (Show, Eq, Ord, Generic, Functor)
A Rule a
is either a "base" Char
match, or it is a list of options of sequences
(a list of "or"'s of "and then"'s) of a
. The choice of a
gives us
our interesting behavior.
For example, our initial ruleset from the input file is a list of Rule Int
s:
either they are a simple Char
, or they contain a list of options of sequences
of rule id's (Int
). We can load it all as an IntMap (Rule Int)
, where each
Rule Int
is stored under its rule ID.
Just to help us get an intuition for this type, let's look at what happens if we want to "expand" out a rule all the way to only leaves at the end of a bunch of nested choices and sequences. This isn't required for the solve, but could be pretty fun.
For that, we can use the Fix
data type:
newtype Fix f = Fix (f (Fix f))
type ExpandedRule = Fix Rule
A Fix Rule
is infinite nested Rule
s: it's essentially Rule (Rule (Rule (Rule ...)))
forever, meaning underneath each Compound
are new rules, and at
the end of it all we only have Simple Char
s, and no more Int
s. For example,
we could represent rule 0 of
0: 1 2 | 3
1: 3
2: 3 3
3: "a"
as
Fix $ Compound [
    [ Fix $ Compound [[Fix (Simple 'a')]]
    , Fix $ Compound [[Fix (Simple 'a'), Fix (Simple 'a')]]
    ]
  , [Fix (Simple 'a')]
  ]
But, given an IntMap (Rule Int)
(the "unexpanded" raw rules as they are in
the input file), how do we get our Fix Rule
?
We can use the handy ana
function, which, given an expansion function a -> Rule a
, returns a a -> Fix Rule
: It runs the a -> Rule a
expansion
function on the "seed" a
, and then runs it again on all the a
s in the
result, and again, and again, etc., until there are no more a
s to expand.
Well, in our case, our "expansion" function is Int -> Rule Int
: "To expand an
Int
, look it up in the IntMap (Rule Int)
". And that gives us a function
to fully expand any rule number:
expandRule :: IntMap (Rule Int) -> Int -> Fix Rule
expandRule rs = ana (rs IM.!)
Neat, huh? That will fully expand the rule at any index by repeatedly
re-expanding it with (rs IM.!)
until we are out of things to expand.
Another fun thing we can write that we could actually use for part 1 is to turn
a Fix Rule
into a list of all possible strings to match. We want to
write a Fix Rule -> [String]
function by tearing down our recursive data
type, and this could be nicely expressed with a catamorphism (cata :: (Rule a -> a) -> Fix Rule -> a
), where we specify how to tear down a "single layer" of
our Rule
type, and cata
will generalize that to tear down the entire
structure. I talk about this a bit in my recursion schemes blog
post, and the
explanation I give is "The a
values in the Rule
become the very things we
swore to create." --- in this case, the [String]
So let's write our Rule [String] -> [String]
:
generateAlg :: Rule [String] -> [String]
generateAlg = \case
Simple c -> [[c]] -- the single single-char string is created
Compound xs -> concatMap (fmap concat . sequence) xs  -- concat/sequence all options
And now cata generateAlg
will generate all possible matches from a ruleset
ghci> cata generateAlg
(Fix $ Compound [[Fix (Simple 'h'), Fix (Simple 'e')], [Fix (Simple 'h')], [Fix (Simple 'q')]])
["he","h","q"]
Okay, that's enough playing around for now...time to find our real solution :)
Note that we can "interpret" a rule to match it on a string by turning it into
a String -> [String]
: it'll take a string and return a list of the leftovers
of every possible match. For example, running the rules (he)|h|q
on "hello"
should give us ["llo","ello"]
. Then we can just see if we have any matches
that return empty leftovers.
For aid in thinking, let's imagine turning a Fix Rule
into a String -> [String]
. We can do that with the help of cata :: (Rule a -> a) -> Fix Rule -> a
. Because we want to write a Fix Rule -> (String -> [String])
, our
catamorphism function ("algebra") is Rule (String -> [String]) -> (String -> [String])
:
matchAlg :: Rule (String -> [String]) -> String -> [String]
matchAlg = \case
Simple c -> \case
[] -> []
d:ds -> if c == d then [ds] else []
Compound xs -> \str ->
concatMap (sequenceAll str) xs
where
-- run the String -> [String]s on an input String one after the other
sequenceAll :: String -> [String -> [String]] -> [String]
sequenceAll s0 fs = foldr (>=>) pure fs s0
match :: Fix Rule -> String -> [String]
match = cata matchAlg
We want to fail on our input string (return no matches) if we see a Simple c
with either an empty input string or one that doesn't match the c
. Then for
the Compound
case with our xs :: [[String -> [String]]]
, we take a choice
(concatMap
) of all of the possible full sequences of the inner [String -> [String]]
sequences.
ghci> match (Fix $ Compound [[Fix (Simple 'h'), Fix (Simple 'e')], [Fix (Simple 'h')], [Fix (Simple 'q')]])
"hello"
["llo", "ello"]
Alright, so now how do we solve the final puzzle?
It looks like we need to "generate" a Fix Rule
, and immediately tear it down
into a String -> [String]
to use it to match a string. "Generate recursively
and immediately tear down recursively"...that's a hylomorphism!
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
-- which we use as...
hylo :: (Rule b -> b) -> (a -> Rule a) -> a -> b
-- which we use as...
hylo :: (Rule (String -> [String]) -> (String -> [String]))
-> (Int -> Rule Int)
-> Int
-> (String -> [String])
If we give hylo
a way to "break down nested Rule
s" and a way to "build up
nested Rule
s", then it can actually iteratively expand up Rule
s while
immediately tearing them down. The nice thing about this is that it's
very lazy: it'll only call the generator function if you ever need the
thing during your teardown function. Since our teardown function (the String -> [String]
) will terminate whenever we encounter an empty string or no
matches, hylo
will only run the build-up function until the point that we hit
one of those conditions. You can also think of it as running it on a Rule Int
where each Int
is dynamically looked up as you need it from the rules map.
The neat thing about this is that we don't ever need Fix
at all: it's all
built up and torn down "in-place", and we never built up any intermediate
value. That's why I mentioned that the Fix
part earlier was more of a
side-tangent! But it definitely helps us understand the big picture, I feel.
Our final code (the whole of it, minus the parser) ends up being:
data Rule a = Simple Char
| Compound [[a]]
deriving (Show, Eq, Ord, Generic, Functor)
matchAlg :: Rule (String -> [String]) -> String -> [String]
matchAlg = \case
Simple c -> \case
[] -> []
d:ds -> if c == d then [ds] else []
Compound xs -> \str ->
concatMap (sequenceAll str) xs
where
sequenceAll s0 fs = foldr (>=>) pure fs s0
matcher :: IntMap (Rule Int) -> String -> [String]
matcher rules = hylo matchAlg (rules IM.!) 0
solver :: IntMap (Rule Int) -> [String] -> Int
solver rules = length . filter (any null . matcher rules)
part1 :: IntMap (Rule Int) -> [String] -> Int
part1 = solver
part2 :: IntMap (Rule Int) -> [String] -> Int
part2 rs = solver (extraRules <> rs)
extraRules :: IntMap (Rule Int)
extraRules = IM.fromList [
(8 , Compound [[42],[42,8]])
, (11, Compound [[42,31],[42,11,31]])
]
As a nice little bonus, we can also use generateAlg
with a hylomorphism to
also turn an IntMap (Rule Int)
into a list of all possible strings, which
works for part 1 but would return an infinite list for part 2.
generateAll :: IntMap (Rule Int) -> Int -> [String]
generateAll rules = hylo generateAlg (rules IM.!)
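For example, on a tiny hypothetical ruleset:
ghci> generateAll (IM.fromList [(0, Compound [[1,2],[2]]), (1, Simple 'a'), (2, Simple 'b')]) 0
["ab","b"]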
>> Day 19a
benchmarking...
time 4.273 ms (4.202 ms .. 4.507 ms)
0.990 R² (0.965 R² .. 1.000 R²)
mean 4.244 ms (4.200 ms .. 4.390 ms)
std dev 220.8 μs (54.67 μs .. 480.5 μs)
variance introduced by outliers: 30% (moderately inflated)
* parsing and formatting times excluded
>> Day 19b
benchmarking...
time 27.13 ms (26.34 ms .. 28.22 ms)
0.994 R² (0.987 R² .. 1.000 R²)
mean 26.20 ms (25.94 ms .. 26.80 ms)
std dev 908.6 μs (525.5 μs .. 1.450 ms)
variance introduced by outliers: 10% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Ah, the infamous Day 20 :) I actually went through a few different possible solutions for this before settling on the one I have now. It also pushed me to flesh out my "direction manipulation" mini-library (that I used on Day 12) to be a full "orientation manipulation" mini-library. With it, I get to enumerate, manipulate, and combine the eight possible orientations of a 2d square grid in a nice way.
data Dir = North | East | South | West
-- | Rotate a point by a direction
rotPoint :: Num a => Dir -> V2 a -> V2 a
allDir :: [Dir]
allDir = [North ..]
-- All of these instances are described in my day 12 writeup
instance Semigroup Dir where
instance Monoid Dir where
instance Group Dir where
instance Abelian Dir
-- | A possible orientation (flip and rotate) of a 2d square grid
data D8 = D8 { d8Rot :: Dir, d8Flip :: Bool }
instance Semigroup D8 where
D8 x1 False <> D8 x2 y2 = D8 (x1 <> x2) y2
D8 x1 True <> D8 x2 y2 = D8 (x1 <> invert x2) (not y2)
instance Monoid D8 where
mempty = D8 North False
instance Group D8 where
invert (D8 x False) = D8 (invert x) False
invert (D8 x True ) = D8 x True
allD8 :: [D8]
allD8 = D8 <$> allDir <*> [False, True]
-- | Rotate and flip a point by a 'D8'
orientPoint :: Num a => D8 -> V2 a -> V2 a
orientPoint = \case
D8 North False -> id
D8 East False -> \(V2 x y) -> V2 y (-x)
D8 West False -> \(V2 x y) -> V2 (-y) x
D8 South False -> \(V2 x y) -> V2 (-x) (-y)
D8 North True -> \(V2 x y) -> V2 (-x) y
D8 East True -> \(V2 x y) -> V2 y x
D8 West True -> \(V2 x y) -> V2 (-y) (-x)
D8 South True -> \(V2 x y) -> V2 x (-y)
Having orientations as a data type I can manipulate as first-class values helped me "think" my way through everything a little easier.
First things first, we can break apart a 10x10 tile into the parts that
actually matter: its eight edges (which we can represent as a set of Finite 10
s) and its core (which we can represent as a set of V2 (Finite 8)
, 8x8
points). I'm using Finite
from
finite-typelits mostly
as a way for me to keep track of what I have at each stage --- remember that
Finite 8
, for instance, is one of 0,1,2,3,4,5,6, or 7. This is also handy
because the library gives us strengthen <=< unshift :: Finite 10 -> Maybe (Finite 8)
, that lets us "chop off" the outer edges of a Set (Finite 10)
to
get the Set (Finite 8)
core.
type Edge = Set (Finite 10)
type Core = Set (V2 (Finite 8))
-- | Shift corner to (0,0)
shiftToZero :: (Applicative f, Num a, Ord a) => Set (V2 a) -> Set (V2 a)
-- | mapMaybe but for sets
mapMaybeSet :: Ord b => (a -> Maybe b) -> Set a -> Set b
toTiles
:: Set (V2 (Finite 10))
-> ((Core, D8 -> Edge), Map Edge D8)
toTiles ps = ((core, getEdge), M.fromList (map swap oToEdge))
where
core = mapMaybeSet (traverse (strengthen <=< unshift)) ps
getEdge o = oMap M.! o
oMap = M.fromList oToEdge
oToEdge =
[ (o, mapMaybeSet (\(V2 x y) -> x <$ guard (y == 0)) ps')
| o <- allD8
, let ps' = shiftToZero $ orientPoint (invert o) `S.map` ps
]
Both "orientation to edge at that orientation" (D8 -> Edge
) and "edge to the
orientation that that edge exists at" (Map Edge D8
) are useful things to
have, so we can generate them both here.
Once we do this we can get three separate IntMap
s after parsing the file:
IntMap Core -- a map of tile id's to their cores (for drawing)
IntMap (D8 -> Edge) -- a map of tile id's to their edges at each orientation
IntMap (Map Edge D8) -- a map of tile id's to all of their edges and the orientations they are at
Now for the actual solve --- we're going to build up a Map Point (Int, D8)
one at a time, where the point (V2 Int
) is going to contain the tile id at
that point, as well as the orientation that tile has to be at.
To do that, we're going to use a queue of "open edges": the location that the
open edge is facing, and the direction (north/south/east/west) of that open
edge -- a Map Edge (Point, Dir)
. We'll also keep a set of tile id's that
have not been placed yet. And then at each step:
- Pop an edge off of that queue -- (Edge, (Point, Dir)).
- Search to see if any not-yet-used tiles have any matching edge:
  a. If there is not any, it means that that edge is at the edge of the
     overall map, so just skip it.
  b. If there is a tile, place that tile at the indicated (Point, Dir)
     and place all of its edges into the queue.
- Repeat until the queue is empty.
-- | A placement is a Tile ID and the orientation of that tile
type Placement = (Int, D8)
type Point = V2 Int
assembleMap
:: IntMap (D8 -> Edge) -- ^ tile id to the edge at each orientation
-> IntMap (Map Edge Placement) -- ^ tile id to the map of edges to what tile id, orientation that edge is at
-> Map Point Placement -- ^ map of points to the tile id, orientation at each point
assembleMap tileMap tiles0 =
go (toQueue 0 mempty t0id allDir)
(IM.keysSet tiles1)
(M.singleton 0 (t0id, mempty))
where
-- populate the initial tile and the initial queue
((_ , t0Map), tiles1) = IM.deleteFindMin tiles0
((_, (t0id, _)), _ ) = M.deleteFindMin t0Map
-- a cache of edges to tiles ID's (and orientations) that have that edge.
tileCache :: Map Edge [Placement]
tileCache = M.fromListWith (++)
[ (edge, [placement])
| (_ , tileEdges) <- IM.toList tiles0
, (edge, placement) <- M.toList tileEdges
]
go :: Map Edge (Point, Dir) -- ^ queue: edge -> place, orientation
-> IntSet -- ^ leftover points
-> Map Point Placement -- ^ current map
-> Map Point Placement -- ^ sweet tail recursion
go queue tiles mp = case M.minViewWithKey queue of
Nothing -> mp
Just ((edge, (pos, d)), queue') ->
case find ((`IS.member` tiles) . fst) (tileCache M.! edge) of
Nothing -> go queue' tiles mp
Just (tileId, o) ->
-- If we're adding a North edge, then it's the new tile's South
-- edge; if we are adding a East edge, it's the new tile's West
-- edge, etc; (d <> South) is the right relationship to properly
-- flip
let o' = o <> D8 (d <> South) True
newQueue = toQueue pos o'
tileId
(filter (/= d <> South) allDir)
in go (newQueue <> queue)
(IS.delete tileId tiles)
(M.insert pos (tileId, invert o') mp)
-- | For a given image, add the given edges into the queue
toQueue
:: Foldable f
=> Point -- ^ location of corner
-> D8 -- ^ orientation to insert
-> Int -- ^ tile id
-> f Dir -- ^ edges to insert
-> Map Edge (Point, Dir)
toQueue p0 o tileId ds = M.fromList $ toList ds <&> \d ->  -- for each dir
( (tileMap IM.! tileId) (o <> D8 d False) -- the edge
, ( p0 + rotPoint d (V2 0 (-1)) -- the new point
, d
)
)
We can wrap this all up in a solver to extract the Map Point Placement
(using
assembleMap
) and the Set Point
--- the "actual" pixel map that represents
all of the points themselves in 2d space.
solve
:: IntMap (Set (V2 (Finite 10)))
-> (Map Point Placement, Set Point)
solve ts = (shiftToZero mp, blitted)
where
info = toTiles <$> ts
edgeMap = IM.mapWithKey (\i (_, e) -> (i,) <$> e) info
edges = snd . fst <$> info
mp = assembleMap edges edgeMap
blitted = flip M.foldMapWithKey mp $ \p (tileId, o) ->
let core = fst . fst $ info IM.! tileId
in S.map (+ (p * 8)) . shiftToZero $ orientPoint o `S.map` S.map (fmap fromIntegral) core
We can use the Map Point Placement
to get the answer to part 1: just look at
the tile id's at the corners of the map. Since we shiftToZero
, we can just
look up mp M.! V2 0 0
, mp M.! V2 0 12
, mp M.! V2 12 0
, and mp M.! V2 12 12
, and multiply them all together.
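As a minimal sketch of that corner product (assuming the 12x12 tile grid of the real input; part1Answer is a hypothetical name):
part1Answer :: Map Point Placement -> Int
part1Answer mp = product
    [ fst (mp M.! V2 x y)    -- the tile id at each of the four corners
    | x <- [0, 12]
    , y <- [0, 12]
    ]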
For part 2, after we assemble the actual Point
, we can do a search for all
dragons at all orientations.
-- | given a pattern and a map of points, poke out all points matching that
-- pattern.
pokePattern
:: Set Point -- ^ pattern
-> Set Point -- ^ map
-> Set Point
pokePattern pat ps0 = foldl' go ps0 (range (V2 0 0, V2 96 96))
where
go ps d
| pat' `S.isSubsetOf` ps = ps S.\\ pat'
| otherwise = ps
where
pat' = S.mapMonotonic (+ d) pat
And now we try pokePattern
with the dragon at all orientations until we find
one that gets any pokes:
dragon :: Set Point -- the dragon image
allDragons :: [Set Point] -- the dragon image at all orientations
allDragons =
[ shiftToZero $ orientPoint o `S.map` dragon
| o <- allD8
]
dragonCount
:: Set Point
-> Maybe Int
dragonCount fullMap = listToMaybe
[ res
| drgn <- allDragons
, let res = S.size $ pokePattern drgn fullMap
, res /= S.size fullMap
]
And that concludes my solve of what was probably the most complex challenge of the month! Overall a lot of moving parts, but I was at least very happy to be able to use some knowledge of group theory (in particular, how the orientations of a square compose and interact) to break the puzzle down into pieces that were much easier to think about.
>> Day 20a
benchmarking...
time 29.10 ms (28.84 ms .. 29.92 ms)
0.997 R² (0.990 R² .. 1.000 R²)
mean 29.03 ms (28.81 ms .. 29.63 ms)
std dev 762.3 μs (159.7 μs .. 1.370 ms)
* parsing and formatting times excluded
>> Day 20b
benchmarking...
time 73.35 ms (66.76 ms .. 90.08 ms)
0.931 R² (0.829 R² .. 1.000 R²)
mean 69.27 ms (66.81 ms .. 78.84 ms)
std dev 7.768 ms (154.8 μs .. 13.53 ms)
variance introduced by outliers: 35% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Another nice self-contained constraint satisfaction problem, along the lines of Day 16 :) Actually, after solving this one, I went back and rewrote my day 16 solution in terms of a common solver function that works for both!
-- | Given a map of @k@ to possible @a@s for that @k@, find possible
-- configurations where each @k@ is given its own unique @a@.
pickUnique :: (Ord k, Ord a) => [(k, Set a)] -> [Map k a]
pickUnique mp = flip evalStateT S.empty $ do
fmap M.fromList . for opts . traverse $ \poss -> do
seen <- get
pick <- lift $ S.toList (poss `S.difference` seen)
pick <$ modify (S.insert pick)
where
opts = sortOn (S.size . snd) mp
It uses StateT
over list, like I described in a constraint solving blog
post.
Basically it explores all of the possibilities of drawing from a state of
"items left-over to assign". The state is a Set a
of items not yet picked,
and at every step we non-deterministically pick
an a
out of the given (k, Set a)
of options that hasn't already been chosen. We use that pick and
add that picked item to the picked item set along that branch.
We also sort by the size of the possibility set for each k
, because starting
with smaller possibilities keeps our tree tight at the top, instead of wide ---
we can eliminate options much more quickly.
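Here's a toy run (using Chars as the items); the only consistent assignment requires giving up 'a' for key 1:
ghci> pickUnique [(1, S.fromList "ab"), (2, S.fromList "a"), (3, S.fromList "abc")]
[fromList [(1,'b'),(2,'a'),(3,'c')]]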
Now all we need to do is to get our information into a [(k, Set a)]
. In our
case, this is [(String, Set String)]
-- with each allergen, associate a set
of possible foods they might be associated with.
We can do this by just taking an intersection of all the possibilities on each line:
assembleOptions
:: (Ord k, Ord a)
=> [(Set a, Set k)] -- set of foods, set of allergens
-> Map k (Set a) -- each allergen with the foods they were seen with in all occurrences
assembleOptions info = M.unionsWith S.intersection $
[ M.fromSet (const igr) alg -- a map of allergens to all foods they were seen with in this item
| (igr, alg) <- info
]
We generate a list of allergens to all foods they were seen with on each item,
and then intersect
all of those foods within an allergen, so that our final
Map k (Set a)
matches each k
allergen with a set of all foods that were
present in all of the occurrences of each allergen.
Now part 2 is basically just reading off the results of pickUnique
part2 :: [(Set String, Set String)] -> Maybe [String]
part2 = fmap M.elems . listToMaybe . pickUnique . assembleOptions
We definitely have a nice advantage here in that the Map String String
(the
result map of allergens to foods) already is sorted in order of allergens
(alphabetically), so no need to do anything other than just M.elems
:)
Part 1 is definitely slightly more complicated: not only do we need to find the allergenic foods, we have to count the occurrences of non-allergenic foods in all the items:
part1 :: [(Set String, Set String)] -> Maybe Int
part1 info = do
    allergenicFoods <- fmap (S.fromList . M.elems)
                     . listToMaybe
                     . pickUnique
                     . assembleOptions
                     $ info
    pure . length $
      filter (`S.notMember` allergenicFoods) allFoodOccurrences
  where
    allFoodOccurrences :: [String]
    allFoodOccurrences = concatMap (S.toList . fst) info
>> Day 21a
benchmarking...
time 270.6 μs (267.0 μs .. 277.0 μs)
0.997 R² (0.994 R² .. 0.999 R²)
mean 273.1 μs (269.2 μs .. 283.4 μs)
std dev 22.37 μs (8.162 μs .. 40.92 μs)
variance introduced by outliers: 71% (severely inflated)
* parsing and formatting times excluded
>> Day 21b
benchmarking...
time 162.9 μs (160.4 μs .. 165.9 μs)
0.997 R² (0.994 R² .. 1.000 R²)
mean 160.2 μs (158.4 μs .. 165.3 μs)
std dev 9.685 μs (3.385 μs .. 17.84 μs)
variance introduced by outliers: 59% (severely inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
This one can be a fun exercise in explicit/direct tail recursion :) It's a straightforward implementation of an "imperative" algorithm, but we actually gain a lot from implementing our imperative algorithm in a purely functional setting, and can write something that runs faster than we might write in a language with implicit mutation. Immutability can be an optimization, since our data structures are designed around sharing and avoiding deep clones, so storing references and caches to old values are extremely cheap. I explain more about this at the end, but it's nice that we can get the advantages of imperative programming without most of the drawbacks of implicit mutation slowing down our code.
This problem is also a nice showcase of Haskell's standard "queue" data type,
Seq
from
Data.Sequence,
with O(1) pushing and popping from both ends.
I decided to write a function that I could use to parameterize on for both parts.
import Data.Sequence (Seq(..))
import Data.Sequence.NonEmpty (NESeq(..))
import qualified Data.Sequence as Seq
type Deck = Seq Int
type NEDeck = NESeq Int
data Player = P1 | P2
playGameWith
:: (NEDeck -> NEDeck -> Maybe Player) -- ^ handler
-> Deck -- ^ p1 starting deck
-> Deck -- ^ p2 starting deck
-> (Player, Deck) -- ^ winner and deck
The handler function will let us specify how to handle the situation when both
decks are non-empty (represented by
Data.Sequence.NonEmpty).
If it returns Nothing
, we defer to the
higher-card-wins War rules,
and if it returns Just
, we take that Player
as the winner of that round.
For part 1, we always defer to the higher-card-wins rule, so we can ignore our
decks and return Nothing
.
game1 :: Deck -> Deck -> (Player, Deck)
game1 = playGameWith $ \_ _ -> Nothing
For part 2, we want to play a game with the tops of the decks given to us, but only if we have enough cards.
game2 :: Deck -> Deck -> (Player, Deck)
game2 = playGameWith $ \(x :<|| xs) (y :<|| ys) -> do
xs' <- takeExactly x xs
ys' <- takeExactly y ys
pure $ fst (game2 xs' ys')
takeExactly :: Int -> Seq a -> Maybe (Seq a)
takeExactly n xs = Seq.take n xs <$ guard (Seq.length xs >= n)
If we don't have enough items to take exactly x
items from xs
, then we fail
and defer to higher-card-wins rules (and same for y
and ys
). Otherwise, we
play a game2
with the exactly-sized deck tops to determine the winner. The
way the recursion is structured here is pretty night because there is a loop
between the two function pointers (game2
, and the lambda passed to it), so we
can go back and forth between them without allocating new functions.
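For instance, takeExactly either takes exactly n items or fails outright:
ghci> takeExactly 2 (Seq.fromList [1,2,3])
Just (fromList [1,2])
ghci> takeExactly 5 (Seq.fromList [1,2,3])
Nothing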
Now the only thing left is to actually write playGameWith
:D This one is not
too bad if we use a helper function to make sure things stay tail-recursive so
we don't accidentally leak space. We also would like to make sure we keep the
same top-level f
in the closure for the whole time, so that the recursive
call in go
to go
will go exactly back to its own function pointer.
import Data.Set (Set)
import qualified Data.Set as S
playGameWith
:: (NEDeck -> NEDeck -> Maybe Player) -- ^ handler
-> Deck -- ^ p1 starting deck
-> Deck -- ^ p2 starting deck
-> (Player, Deck) -- ^ winner and deck
playGameWith f = go S.empty
where
go :: Set (Deck, Deck) -> Deck -> Deck -> (Player, Deck)
go !seen !xs0 !ys0
| (xs0, ys0) `S.member` seen = (P1, xs0)
| otherwise = case (xs0, ys0) of
(x :<| xs, y :<| ys) ->
let winner = case f (x :<|| xs) (y :<|| ys) of
Nothing -> if x > y then P1 else P2
Just p -> p
in case winner of
P1 -> go seen' (xs :|> x :|> y) ys
P2 -> go seen' xs (ys :|> y :|> x)
(Empty, _ ) -> (P2, ys0)
(_ , Empty) -> (P1, xs0)
where
seen' = S.insert (xs0, ys0) seen
Most of this implementation follows the logic straightforwardly, remembering to
use f
to give the callback a chance to "intercept" the "highest card win"
rule if it wants to. We get a lot of mileage here out of the :<|
, :|>
and
Empty
constructors for Seq
, which allows us to match on the head and tail
or an empty Seq
as a pattern. Note that this isn't perfectly
tail-recursive -- we do get another layer of data allocated whenever we
recurse into a game. But at least it's tail-recursive within the same game.
Note that this talk about tail recursion isn't because we are afraid of overflowing the call stack like in other languages (and trying to take advantage of tail-call optimization) --- the value in tail recursion is that we can stay constant-space on the heap (since haskell function calls go on the heap, not a call stack).
This works, but we can make it a little faster in a way that only purely
functional languages can benefit from. Checking for seen decks in a Set (Deck, Deck)
can be pretty expensive in such a tight loop, and it's definitely
the bottleneck of our loop. One quick optimization we can do is use an
IntSet
instead of a Set
, and store a "hash" (really, partition index) of
our data:
hashHand :: Deck -> Deck -> Int    -- 'hash' from Data.Hashable
hashHand xs ys = hash (take 2 (toList xs), take 2 (toList ys), length xs)
So instead of checking if a hand pair has been seen before, we can just check
hashHand xs0 ys0 `IS.member` seen
, and IS.insert (hashHand xs0 ys0) seen
at every step. This becomes very efficient (takes my time from 1.8s down to
8ms), effectively eliminating the main bottleneck.
However, this method is mathematically unsound because it's possible for two
different decks to "hash" to the same Int
. It didn't happen in my own input,
but it happened when solving the game for one of my friend's inputs.
Instead what we can do is implement a "hash set", with easy negative checks, and
expensive positive checks --- but those should only happen basically once per
game, and not once per round. We can store a IntMap (Set (Deck, Deck))
:
go :: IntMap (Set (Deck, Deck)) -> Deck -> Deck -> (Player, Deck)
go !seen !xs0 !ys0
| collision = (P1, xs0)
| otherwise = case (xs0, ys0) of
(x :<| xs, y :<| ys) ->
let winner = case f (x :<|| xs) (y :<|| ys) of
Nothing -> if x > y then P1 else P2
Just p -> p
in case winner of
P1 -> go seen' (xs :|> x :|> y) ys
P2 -> go seen' xs (ys :|> y :|> x)
(Empty, _ ) -> (P2, ys0)
(_ , Empty) -> (P1, xs0)
where
collision = case IM.lookup (hashHand xs0 ys0) seen of
Nothing -> False
Just s -> (xs0, ys0) `S.member` s
seen' = IM.insertWith (<>) (hashHand xs0 ys0) (S.singleton (xs0, ys0)) seen
Note storing the (Deck, Deck)
in our IntMap
is very expensive if we are
using in-place mutation for our decks: we'd have to do a full copy of our
decks every round to store them into our set, because mutating them will
change them. In the purely functional case, we don't have to do anything
special because no values are ever mutated --- the reference to our old data is
already there!
In addition, inserting/popping values off of a Seq
does not require a full
copy: because Seq
is internally a finger
tree (a purely functional
persistent data structure optimized for these operations), adding a new value
does not require a full copy, but instead allocates very little because most of
your "new" tree's internal nodes are pointing at references to the original
tree. So no copying is ever made, and storing these Seq
s in our IntMap
is
essentially just storing a pointer.
This is one of the nice ways which immutability can give us performance increases! These are always fun to highlight because there's some common fantasy that immutability = slower, when in reality it's often an optimization.
>> Day 22a
benchmarking...
time 230.6 μs (228.1 μs .. 236.1 μs)
0.998 R² (0.993 R² .. 1.000 R²)
mean 228.9 μs (227.4 μs .. 233.8 μs)
std dev 7.584 μs (798.7 ns .. 15.31 μs)
variance introduced by outliers: 29% (moderately inflated)
* parsing and formatting times excluded
>> Day 22b
benchmarking...
time 7.770 ms (7.469 ms .. 8.183 ms)
0.988 R² (0.976 R² .. 0.999 R²)
mean 7.805 ms (7.666 ms .. 7.963 ms)
std dev 398.9 μs (262.8 μs .. 568.5 μs)
variance introduced by outliers: 25% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Day 23 -- this one definitely stumped me a while, and it was the first one to take me more than 24 hours!
Part 1 was straightforward enough with the circular Pointed List, and was pretty fun indeed. But the main problem with extrapolating this to part 2 was the crucial "slow part": finding the index of the "preceding" cup. Using a circular pointed list makes it very fast to do things like take 3 cups and insert 3 cups where you want them, but the tough thing is finding where you want to re-insert them: if you pick up cup #3, then where is cup #2? My circular pointed list (and my later mutable vector based one, among other attempts) all suffered from that same problem: re-arranging the cups is fast, but I couldn't figure out a way to know where to place them without doing a full linear search. And this was tractable for 10 cups, but pretty much impossible for 1 million cups -- especially since the location of the 'preceding cup' soon became very far from the current cup (it goes to the full 500k pretty quickly!)
In frustration, I implemented a mutable circularly linked list library...but found the same problem: I could easily take and insert, but no easy way to find out where the preceding cup was without doing an item-by-item traversal.
The breakthrough finally came when I thought about attaching a pointer to the preceding cup's cell to each linked list cell --- a "backdoor" pointer that skips across the circularly linked list. This should be doable because the structure of "preceding cup" is fixed -- it won't ever change, and so this pointer should also be fixed as well as you shuffle everything over it. I had the visual imagery of "pulling" the three taken cups up back "through" the backdoor pointer, and everything seemed very efficient, since the main inefficiency (finding the preceding cup) was fixed.
Unfortunately I am not skilled enough in pointer manipulation and other imperative programming intricacies to be able to implement this in a nice way. So I stepped back and thought about just "reifying" this pointer structure into an array of indices (pointers), where the addresses were indices.
Each cell would have to contain:
- The index of the cup to the right
- The index of the preceding cup
Only...#2 doesn't need to actually be a part of the cell, because it's fixed and never mutates. So we only need to have each cell hold #1, and use some sort of scheme to get #2.
And then that's when it hit me --- if I simply stored Cup #1 at index 0, Cup #2 at index 1, Cup #3 at index 2, etc...then #2 is simply "the previous index"! So in the end we only need an array of indices, where each index corresponds to that cup. The "preceding-cup" structure is fixed, and we only need to update the "cup to the right" pointers!
import Data.Finite
import qualified Data.Vector.Mutable.Sized as MV
import qualified Data.Vector.Sized as V
type CrabState n s = MV.MVector n s (Finite n)
Our data structure will be a million-sized mutable vector where the slot at index i
stores
the index (cup number, essentially) of the cup immediately to the right of the cup labeled i
(technically,
i+1
). We can use Finite n
(Finite 1000000
in our case) for our index
size because it is constrained to be between 0 and 999999, and subtracting past
0 wraps back up to 999999 like we'd want it to.
step
:: forall n m s. (KnownNat n, PrimMonad m, PrimState m ~ s)
=> CrabState n s
-> Finite n -- ^ current pointer
-> m (Finite n) -- ^ next pointer
step cs lab = do
-- pull out the next three cups, and the cup fourth to the right
(gs, lab') <- pull3 lab
let [g1, _, g3] = gs    -- lazy pattern match; pull3 always returns three cups
-- update the "cup-to-the-right" of the pointer cup
MV.write cs lab lab'
-- find the first valid "preceding cup"
let target = until (`notElem` gs) (subtract 1) (lab - 1)
-- what cup is to the right of the target cup?
aftertarg <- MV.read cs target
-- pointer shuffling: the target cup should point to the pulled cups
MV.write cs target g1
-- .. and the final pulled cup should point to where the target cup pointed to originally
MV.write cs g3 aftertarg
pure lab'
where
pull3 :: Finite n -> m ([Finite n], Finite n)
pull3 i0 = do
i1 <- MV.read cs i0
i2 <- MV.read cs i1
i3 <- MV.read cs i2
i4 <- MV.read cs i3
pure ([i1,i2,i3],i4)
Now we just need to initialize our state: for each cup in the starting vector, we write it as the right-neighbor of the cup that comes before it:
initialize
:: forall n m s. (KnownNat n, PrimMonad m, PrimState m ~ s)
=> V.Vector n (Finite n) -- ^ vector, organized left-to-right
-> m (Finite n, CrabState n s) -- ^ initial pointer
initialize v0 = do
cs <- MV.new
for_ finites $ \i -> -- iterate over each index
MV.write cs (v0 `V.index` (i - 1)) (v0 `V.index` i)
let i0 = v0 `V.index` 0
pure (i0, cs)
And now a function to mutate our crab state a given number of points, from an initial pointer index:
run :: (KnownNat n, PrimMonad m, PrimState m ~ s)
=> Int -- ^ number of steps
-> Finite n -- ^ initial index
-> CrabState n s
-> m ()
run n i0 cs = go 0 i0
where
go m i
| m == n = pure ()
| otherwise = go (m + 1) =<< step cs i
And maybe some functions to read out the actual answers:
numbersFrom1
    :: (KnownNat n, PrimMonad m, PrimState m ~ s)
    => Int             -- ^ how many numbers to pull
    -> CrabState n s
    -> m [Finite n]
numbersFrom1 n cs = go 0 0
where
go m i
| m == n = pure []
| otherwise = do
nxt <- MV.read cs i
(nxt:) <$> go (m+1) nxt
And we have our full pipeline, remembering that we have to subtract 1 to get the index of a cup from the cup number:
part1 :: [Int] -> [Int]
part1 cs0 = runST $ do
    (i0, cs) <- initialize v0
    run 100 i0 cs
    map ((+ 1) . fromIntegral) <$> numbersFrom1 8 cs
  where
    v0 :: V.Vector 9 (Finite 9)
    Just v0 = V.fromList $
        fromIntegral . subtract 1 <$> cs0
part2 :: [Int] -> Int
part2 cs0 = runST $ do
    (i0, cs) <- initialize v0
    run 10000000 i0 cs
    -- lazy pattern, since ST has no MonadFail
    ~[x, y] <- map ((+ 1) . fromIntegral) <$> numbersFrom1 2 cs
    pure (x * y)
  where
    v0 :: V.Vector 1000000 (Finite 1000000)
    Just v0 = V.fromList $
        (fromIntegral . subtract 1 <$> cs0)
          ++ [9 ..]     -- indices 9 .. 999999: cups 10 through 1,000,000
Overall, a very fun puzzle that required a bunch of interesting data structure and representation breakthroughs to tackle :)
>> Day 23a
benchmarking...
time 4.469 μs (4.420 μs .. 4.544 μs)
0.997 R² (0.993 R² .. 1.000 R²)
mean 4.452 μs (4.424 μs .. 4.542 μs)
std dev 181.5 ns (39.87 ns .. 343.3 ns)
variance introduced by outliers: 53% (severely inflated)
* parsing and formatting times excluded
>> Day 23b
benchmarking...
time 194.3 ms (190.4 ms .. 196.6 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 195.4 ms (194.3 ms .. 198.1 ms)
std dev 2.172 ms (125.3 μs .. 3.023 ms)
variance introduced by outliers: 14% (moderately inflated)
* parsing and formatting times excluded
Prompt / Code / Rendered / Standalone Reflection Page
Day 24 brings us our third cellular automata puzzle of the year! :D The other ones were Day 11 and Day 17. In fact, I was able to mostly copy and paste my stepper code for Day 17 :)
The main twist here is that we'd have to use hexy stepping and hexy neighbors. My initial solve used the grid library to get the hexy steps and neighbors, but I did go back and implement the tiling myself because it wasn't too bad :)
For part 1, it can be nice to have some intermediate data types:
data HexDirection = West
                  | Northwest
                  | Northeast
                  | East
                  | Southeast
                  | Southwest
  deriving Show
toDirs :: String -> Maybe [HexDirection]
toDirs = \case
    []         -> Just []
    'w':ds     -> (West:)      <$> toDirs ds
    'e':ds     -> (East:)      <$> toDirs ds
    'n':'e':ds -> (Northeast:) <$> toDirs ds
    'n':'w':ds -> (Northwest:) <$> toDirs ds
    's':'e':ds -> (Southeast:) <$> toDirs ds
    's':'w':ds -> (Southwest:) <$> toDirs ds
    _          -> Nothing
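For example, on a tiny path (a hypothetical ghci session, using the derived Show instance):

ghci> toDirs "esew"
Just [East,Southeast,West]
ghci> toDirs "xyz"
Nothing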
hexOffset :: HexDirection -> Point
hexOffset = \case
    West      -> V2 (-1)   0
    Northwest -> V2 (-1)   1
    Northeast -> V2   0    1
    East      -> V2   1    0
    Southeast -> V2   1  (-1)
    Southwest -> V2   0  (-1)
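A path's final tile is then just the sum of its offsets (V2's Num instance adds componentwise); the "esew" path above ends one tile to the southeast:

ghci> sum . map hexOffset <$> toDirs "esew"
Just (V2 1 (-1))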
So we can parse into a list of [HexDirection] paths, and then we can get our starting points by xoring all of the final points:
import Data.Bits (xor)
import Data.Map (Map)
import Data.Set (Set)
import qualified Data.Map as M
import qualified Data.Set as S

initialize :: [[HexDirection]] -> Set Point
initialize = M.keysSet . M.filter id . M.fromListWith xor
           . map (\steps -> (sum (map hexOffset steps), True))
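The xor here is doing the flip-counting for us: a tile flipped an even number of times ends up white again, and fromListWith combines duplicate keys with xor, so only the odd-flipped (black) tiles survive the M.filter id. A quick illustration:

ghci> M.fromListWith xor [(1,True),(2,True),(1,True)]
fromList [(1,False),(2,True)]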
And this gives us the set of all active points, which we can use to answer part one. But now, on to the simulation!
First, we can expand the neighbors of a given point in our hexy coords:
neighbors :: Point -> Set Point
neighbors (V2 x y) = S.fromDistinctAscList
    [ V2 (x-1) y
    , V2 (x-1) (y+1)
    , V2 x     (y-1)
    , V2 x     (y+1)
    , V2 (x+1) (y-1)
    , V2 (x+1) y
    ]
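These are exactly the six hexOffset directions re-centered on the given point, pre-sorted so we can use fromDistinctAscList:

ghci> neighbors (V2 0 0)
fromList [V2 (-1) 0,V2 (-1) 1,V2 0 (-1),V2 0 1,V2 1 (-1),V2 1 0]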
And our step function looks more or less the same as day 17:
step :: Set Point -> Set Point
step ps = stayAlive <> comeAlive
  where
    neighborCounts :: Map Point Int
    neighborCounts = M.unionsWith (+)
        [ M.fromSet (const 1) (neighbors p)
        | p <- S.toList ps
        ]
    stayAlive = M.keysSet . M.filter (\n -> n == 1 || n == 2) $
                  neighborCounts `M.restrictKeys` ps
    comeAlive = M.keysSet . M.filter (== 2) $
                  neighborCounts `M.withoutKeys` ps
First we collect a Map Point Int of each point to how many live neighbors it has. Then the live points (neighborCounts `M.restrictKeys` ps) are filtered for only the ones with 1 or 2 live neighbors, and the dead points (neighborCounts `M.withoutKeys` ps) are filtered for only the ones with 2 live neighbors. And the resulting new set of live points is stayAlive <> comeAlive.
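The counting idiom deserves a tiny illustration: every live point contributes a count of 1 to each of its neighbors, and unionsWith (+) adds up the overlaps:

ghci> M.unionsWith (+) [M.fromSet (const 1) (S.fromList "ab"), M.fromSet (const 1) (S.fromList "bc")]
fromList [('a',1),('b',2),('c',1)]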
part1 :: [[HexDirection]] -> Int
part1 = S.size . initialize

part2 :: [[HexDirection]] -> Int
part2 paths = S.size (iterate step pts !!! 100)
  where
    pts = initialize paths
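The !!! here is a strict version of !!: it forces every intermediate set as it walks the iterate list, so a hundred steps of thunks never pile up. A minimal sketch of such an operator (a stand-in for the actual helper), in case you don't want to pull in a library:

-- a strict (!!): forces each element it walks past
(!!!) :: [a] -> Int -> a
[]     !!! _ = error "(!!!): index out of range"
(x:_)  !!! 0 = x
(x:xs) !!! n = x `seq` (xs !!! (n - 1))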
>> Day 24a
benchmarking...
time 2.597 ms (2.551 ms .. 2.639 ms)
0.996 R² (0.993 R² .. 0.998 R²)
mean 2.579 ms (2.545 ms .. 2.614 ms)
std dev 111.4 μs (82.30 μs .. 141.5 μs)
variance introduced by outliers: 28% (moderately inflated)
>> Day 24b
benchmarking...
time 272.1 ms (247.2 ms .. 296.5 ms)
0.996 R² (0.996 R² .. 1.000 R²)
mean 273.7 ms (264.8 ms .. 286.5 ms)
std dev 13.87 ms (1.266 ms .. 18.02 ms)
variance introduced by outliers: 16% (moderately inflated)
Prompt / Code / Rendered / Standalone Reflection Page
Merry Christmas everyone, it's December 25th :D
The Christmas Problem is usually supposed to be a quick and concise one, since Eric wants people to spend the holiday with their family. This one is a bit obscured in the jargon, but once you sort through it, the solution ends up being pretty tidy :)
In the end you are raising the number 7 to some power (the loop count) to get the number you see. So you're solving 7^x = <your number>...so that's basically a logarithm!
The arithmoi library (which I previously used in problems like Day 13) offers a nice discrete logarithm function, so that's really all we need to use:
type Magic = 20201227

magicGroup :: CyclicGroup Integer Magic
Just magicGroup = cyclicGroup

primBase :: PrimitiveRoot Magic
Just primBase = isPrimitiveRoot magicGroup 7

findSecret :: Mod Magic -> Maybe Natural
findSecret = fmap (discreteLogarithm magicGroup primBase)
           . isMultElement
And so our final solution is just (after converting the input numbers to the Mod Magic data type)...
day25 :: Mod Magic -> Mod Magic -> Maybe Integer
day25 x y = do
    secret <- findSecret x
    pure . getVal $ y ^% secret     -- exponentiate by the loop count
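For example, on what I believe are the sample public keys from the puzzle page (5764801 = 7^8 and 17807724 = 7^11 mod 20201227), this recovers the sample encryption key:

ghci> day25 5764801 17807724
Just 14897079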
Merry Christmas to everyone, and happy New Years too. Thank you for reading these reflections, and I hope they have been helpful in some way :) Special thanks to Eric Wastl too for such a great event as always. Until next year!
>> Day 25a
benchmarking...
time 1.997 ms (1.971 ms .. 2.023 ms)
0.998 R² (0.997 R² .. 0.999 R²)
mean 2.042 ms (2.019 ms .. 2.066 ms)
std dev 73.99 μs (63.56 μs .. 100.2 μs)
variance introduced by outliers: 22% (moderately inflated)
* parsing and formatting times excluded