2018
Day 1: Chronal Calibration
Part 1
Sum all frequency changes to find the final frequency. While this could be done with a simple loop, sum() with a generator expression is more concise.
# Read input and sum the changes
with open("input.txt") as f:
    data = f.readlines()
total = sum(int(instruction) for instruction in data)
print(total)
Part 2
Find the first frequency that appears twice when the changes are applied repeatedly. We need to track every frequency seen so far and cycle through the input until a repeat turns up.
def find_repeat(data):
    frequency = 0
    seen = {0}  # Using a set for O(1) lookups
    # Keep cycling until a repeat is found
    while True:
        for instruction in data:
            frequency += int(instruction)
            if frequency in seen:
                return frequency
            seen.add(frequency)

# Read input and find the first repeat
with open("input.txt") as f:
    data = f.readlines()
print(find_repeat(data))
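An equally idiomatic variant, sketched here as an alternative (find_repeat_cycle is our own name), lets itertools.cycle repeat the parsed changes instead of the explicit while True loop:

from itertools import cycle

def find_repeat_cycle(changes):
    frequency = 0
    seen = {0}
    # cycle() yields the changes over and over, replacing the outer loop
    for change in cycle(changes):
        frequency += change
        if frequency in seen:
            return frequency
        seen.add(frequency)

# The example from the puzzle statement: +1, -2, +3, +1 first repeats at 2
assert find_repeat_cycle([1, -2, 3, 1]) == 2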
Day 2: Inventory Management System
Part 1
Count how many box IDs contain a letter appearing exactly twice and how many contain a letter appearing exactly three times, then multiply the two counts. Using Counter simplifies the letter-frequency analysis.
from collections import Counter

def get_checksum(box_ids):
    twos = threes = 0
    for box_id in box_ids:
        # Collect this ID's letter frequencies
        counts = set(Counter(box_id.strip()).values())
        # Booleans are ints, so each membership test adds 1 or 0
        twos += 2 in counts
        threes += 3 in counts
    return twos * threes

# Read input and calculate the checksum
with open("input.txt") as f:
    data = f.readlines()
print(get_checksum(data))
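As a sanity check, the worked example from the puzzle statement has four IDs containing a doubled letter and three containing a tripled letter, for a checksum of 4 * 3 = 12:

sample = ["abcdef", "bababc", "abbcde", "abcccd", "aabcdd", "abcdee", "ababab"]
assert get_checksum(sample) == 12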
Part 2
Find the two box IDs that differ by exactly one character. itertools.combinations makes comparing every pair straightforward.
from itertools import combinations

def find_similar_ids(box_ids):
    # Compare every pair of IDs
    for id1, id2 in combinations(box_ids, 2):
        # Count positions where the characters differ
        differences = sum(a != b for a, b in zip(id1, id2))
        if differences == 1:
            # Found the match - return the common characters
            return ''.join(a for a, b in zip(id1, id2) if a == b)

# Read input and find the similar IDs
with open("input.txt") as f:
    data = f.readlines()
print(find_similar_ids(data))
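On the puzzle statement's example the matching pair is fghij and fguij, which differ only in the third character:

sample = ["abcde", "fghij", "klmno", "pqrst", "fguij", "axcye", "wvxyz"]
assert find_similar_ids(sample) == "fgij"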
Key concepts:
- Using collections.Counter for character counting
- Set operations for tracking seen values
- Generator expressions for memory efficiency
- itertools.combinations for pair generation
- zip for parallel iteration
These puzzles introduce:
- Frequency tracking/analysis
- Cyclic iteration
- String comparison
- Pair matching
Common patterns:
- File reading with context managers
- Set-based lookups
- Boolean arithmetic
- Character-by-character comparison
- Efficient pair generation
Day 3: No Matter How You Slice It
Part 1
Track overlapping fabric claims in a grid. Each claim has an ID and specifies a rectangle by distance from left edge, distance from top edge, width, and height.
def count_overlaps(claims):
    # Use a dictionary as a sparse grid of square usage
    fabric = {}
    for claim in claims:
        # Parse the claim's coordinates and size
        parts = claim.split()
        coords, size = parts[2].strip(':'), parts[3]
        x, y = map(int, coords.split(','))
        width, height = map(int, size.split('x'))
        # Mark every square covered by this claim
        for i in range(x, x + width):
            for j in range(y, y + height):
                pos = f"{i}x{j}"
                fabric[pos] = fabric.get(pos, 0) + 1
    # Count squares claimed more than once
    return sum(1 for count in fabric.values() if count > 1)
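The function is never called above; a minimal driver in the style of the earlier days (assuming the same input.txt convention) would be:

with open("input.txt") as f:
    print(count_overlaps(f.readlines()))

Tuple keys like (i, j) would work just as well as the f"{i}x{j}" strings; the string form simply keeps the keys readable when debugging.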
Part 2
Find the only claim that doesn't overlap with any other claim.
def find_non_overlapping(claims):
    fabric = {}
    claim_ids = set()
    # First pass: record which claims cover each square
    for claim in claims:
        parts = claim.split()
        claim_id = parts[0]
        coords, size = parts[2].strip(':'), parts[3]
        x, y = map(int, coords.split(','))
        width, height = map(int, size.split('x'))
        claim_ids.add(claim_id)
        for i in range(x, x + width):
            for j in range(y, y + height):
                pos = f"{i}x{j}"
                if pos in fabric:
                    fabric[pos].append(claim_id)
                else:
                    fabric[pos] = [claim_id]
    # Second pass: collect every claim that shares a square
    overlapping = set()
    for square_claims in fabric.values():
        if len(square_claims) > 1:
            overlapping.update(square_claims)
    # The answer is the single claim left over
    return (claim_ids - overlapping).pop()
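Again, a short driver (assuming input.txt) ties it together:

with open("input.txt") as f:
    print(find_non_overlapping(f.readlines()))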
Key concepts:
- Using dictionary for sparse grid representation
- Coordinate tracking with string keys
- Set operations for finding unique claims
- Multiple passes through data for different operations
The puzzle introduces:
- 2D grid problems
- Claim/area overlap detection
- Parsing structured input
- Memory-efficient grid representation
Day 4: Repose Record
Part 1
Parse guard sleep records to find the guard that sleeps the most and their most common sleep minute.
def process_records(records):
    # Sort records chronologically (the timestamps sort lexicographically)
    records.sort()
    # Track each guard's per-minute sleep counts
    guard_schedules = {}
    current_guard = None
    sleep_start = 0
    for record in records:
        minute = int(record[15:17])
        if "Guard" in record:
            current_guard = int(record.split("#")[1].split()[0])
            if current_guard not in guard_schedules:
                guard_schedules[current_guard] = [0] * 60
        elif "falls asleep" in record:
            sleep_start = minute
        elif "wakes up" in record:
            # Mark every minute of the nap as asleep
            for i in range(sleep_start, minute):
                guard_schedules[current_guard][i] += 1
    return guard_schedules
def find_sleepiest_guard(guard_schedules):
    # Find the guard who sleeps the most in total
    max_sleep = 0
    sleepiest_guard = None
    for guard_id, schedule in guard_schedules.items():
        total_sleep = sum(schedule)
        if total_sleep > max_sleep:
            max_sleep = total_sleep
            sleepiest_guard = guard_id
    # Find that guard's most frequent sleep minute
    schedule = guard_schedules[sleepiest_guard]
    sleepiest_minute = schedule.index(max(schedule))
    return sleepiest_guard * sleepiest_minute
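Neither function reads the input itself, so a small driver (assuming input.txt, as on earlier days) runs Part 1 end to end:

with open("input.txt") as f:
    guard_schedules = process_records(f.readlines())
print(find_sleepiest_guard(guard_schedules))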
Part 2
Find the guard most frequently asleep on the same minute.
def find_most_frequent_minute(guard_schedules):
    max_freq = 0
    result = 0
    for guard_id, schedule in guard_schedules.items():
        # This guard's single most-slept minute
        minute_freq = max(schedule)
        if minute_freq > max_freq:
            max_freq = minute_freq
            sleepiest_minute = schedule.index(minute_freq)
            result = guard_id * sleepiest_minute
    return result
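Part 2 reuses the guard_schedules built by process_records, so the driver only needs one extra line:

print(find_most_frequent_minute(guard_schedules))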
Key concepts:
- Chronological sorting of records
- Array-based minute tracking
- Frequency analysis
- Multiple analysis approaches on same data
The puzzle introduces:
- Time-based record processing
- Pattern finding in schedules
- Different statistical approaches
- Data structure choice for analysis
Day 5: Alchemical Reduction
Part 1
Remove pairs of adjacent units that are the same letter but different case until no more reactions are possible.
def remove_at(i, s):
    # Remove the pair of characters at position i
    return s[:i] + s[i+2:]

with open("input.txt") as f:
    polymer = f.read().strip()

# Keep processing until a full pass makes no change
last = None
while last != polymer:
    last = polymer
    for i in range(len(polymer) - 1):
        # Adjacent same letter, opposite case: the pair reacts
        if polymer[i] != polymer[i+1] and polymer[i].lower() == polymer[i+1].lower():
            polymer = remove_at(i, polymer)
            break  # Start over after the removal
print(len(polymer))
Part 2
Remove each unit type (both cases of one letter) in turn and find the shortest possible polymer after full reduction.
# Try removing each letter of the alphabet
alphabet = "abcdefghijklmnopqrstuvwxyz"
results = {}
for letter in alphabet:
    # Strip both cases of the current letter
    testcase = polymer.replace(letter, '').replace(letter.upper(), '')
    # Reduce the polymer until it is stable
    lasttest = None
    while lasttest != testcase:
        lasttest = testcase
        for i in range(len(testcase) - 1):
            # Check for a reacting pair
            if testcase[i] != testcase[i+1] and testcase[i].lower() == testcase[i+1].lower():
                testcase = remove_at(i, testcase)
                break
    results[letter] = len(testcase)
print(min(results.values()))
Key concepts:
- String manipulation using slicing
- Character case comparison
- Iteration until stable state
- Pattern matching and replacement
One elegant alternative uses a stack, turning the repeated rescans into a single O(n) pass over the polymer:
def reduce_polymer(polymer):
    stack = []
    for c in polymer:
        # The character reacts with (cancels) the top of the stack
        if stack and c.swapcase() == stack[-1]:
            stack.pop()
        else:
            stack.append(c)
    return len(stack)
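With this helper both parts become single linear scans. The string below is the example from the puzzle statement: the full polymer reduces to length 10, and removing c/C yields the best possible length of 4.

sample = "dabAcCaCBAcCcaDA"
assert reduce_polymer(sample) == 10

# Part 2: strip each unit type, reduce, and keep the shortest result
best = min(
    reduce_polymer(sample.replace(c, '').replace(c.upper(), ''))
    for c in "abcdefghijklmnopqrstuvwxyz"
)
assert best == 4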