## Genetic Algorithm [Actually, Simulated Annealing]

[After posting this, [Andy Rundquist](https://arundquist.wordpress.com/) informed me that I actually did simulated annealing, not a genetic algorithm. I called it a genetic algorithm initially because (1) I thought it was and (2) I had never heard of simulated annealing. I learned something new! See his comment below for more details, including a potential flaw with my program that I didn't actually run across.]

Inspired by Andy Rundquist’s post from today, I wrote a script in Python (actually, I was inspired by his post from 2012; it is just a happy coincidence that he posted again on this topic today).

Problem: My department is moving to a new building, and we need to assign new offices. We want to assign the offices in a reasonable way that makes people as happy as possible.

Solution: I wrote a genetic algorithm in Python to decide this. Note that I had the faculty rank the offices (by tier, so there could be three first choices, two second choices, etc.), and I used these rankings as the input. Here is a lightly edited version of the email I sent the department:

Here was the process: I wrote a computer program that used a genetic algorithm (the assignments above, save for Brian’s line, were copied and pasted from the computer output). The code is below, but the steps are here:

1. Make an initial office selection. I went alphabetically: (Benesh in
230, Chesterfield in 231, Dalembert in 232, …, Zbinski in 242).

2. Measure the “fitness” of the assignment. I did this by adding up each
person’s ranking for the office they were assigned PLUS the maximum rank
that someone had (to minimize pain). Like golf, a lower score here is
better.

3. Randomly swap the people in two offices.

4. Measure the fitness of this new assignment. If it is better than the
old one, keep it. Otherwise, stay with the old assignment.

5. Repeat 10,000,000 times. Literally.
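The fitness measure in step 2 can be sketched with a toy example. The names and rankings here are made up for illustration, not the department's actual preferences:

```python
# Step-2 fitness: sum of everyone's tier ranking for their assigned
# office, plus the worst (maximum) ranking. Lower is better, like golf.
# Each inner list is one person's ranking for offices 0, 1, 2, with the
# person's name as the last element. All names and numbers are made up.

def fitness(assignment):
    rankings = [assignment[i][i] for i in range(len(assignment))]
    return sum(rankings) + max(rankings)

toy = [
    [1, 2, 3, "Ann"],   # Ann sits in office 0, which she ranked Tier 1
    [2, 1, 3, "Bob"],   # Bob sits in office 1, which he ranked Tier 1
    [3, 2, 1, "Cal"],   # Cal sits in office 2, which he ranked Tier 1
]
print(fitness(toy))  # 1 + 1 + 1 (sum) + 1 (max) = 4
```

Adding the maximum ranking on top of the sum is what "minimizes pain": two assignments with the same total get distinguished by how badly off the unhappiest person is.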

The assignment above is what came out. I am confident that there isn't a better option. I tried to improve the fitness by hand, but there was no way to do it: only two people did not have a Tier 1 office, and each would need to be swapped with someone who preferred that office.
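Another sanity check for a greedy swap loop like this is to restart it from several shuffled seed assignments and compare the best scores; if every restart lands on the same fitness, a local minimum is less likely. Here is a minimal sketch of that idea; the data and helper names are stand-ins, not the actual program:

```python
import random

def fitness(assignment):
    # Same scoring as above: sum of rankings plus the worst ranking.
    rankings = [assignment[i][i] for i in range(len(assignment))]
    return sum(rankings) + max(rankings)

def hill_climb(assignment, steps, rng):
    # Repeatedly try a random swap; keep it only if it improves fitness.
    best = list(assignment)
    for _ in range(steps):
        i, j = rng.sample(range(len(best)), 2)
        trial = list(best)
        trial[i], trial[j] = trial[j], trial[i]
        if fitness(trial) < fitness(best):
            best = trial
    return best

# Illustrative data: three people, three offices (not real preferences).
people = [[1, 2, 3, "Ann"], [1, 3, 2, "Bob"], [2, 1, 3, "Cal"]]

rng = random.Random(0)
scores = []
for _ in range(5):            # several restarts from shuffled seeds
    seed = list(people)
    rng.shuffle(seed)
    scores.append(fitness(hill_climb(seed, 1000, rng)))
print(min(scores))            # best score seen across all restarts
```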

Finally, I was mostly able to honor people’s requests from the
comments, although this was not perfect.

The Python code is below. I removed people’s preferences to protect privacy, but the code looks something like:

bb = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, "Bret"]

where this would indicate that Main 230 was Tier 1, Main 231 was Tier 2, ..., and Main 242 was Tier 13 (these were not my actual rankings). The variable name "bb" is my initials.

Have a great day!
Bret

```python
import random

# Each person's list holds their tier ranking for each office, with the
# person's name as the last element (preferences redacted for privacy).
bb=[REDACTED]
pc=[REDACTED]
rd=[REDACTED]
se=[REDACTED]
jg=[REDACTED]
dh=[REDACTED]
rh=[REDACTED]
kn=[REDACTED]
tp=[REDACTED]
ts=[REDACTED]
an=[REDACTED]
mz=[REDACTED]

seedAssignments=[bb,pc,rd,se,jg,dh,rh,kn,tp,ts,an,mz]

def fitness(oa):
    # Sum of everyone's ranking for their assigned office, plus the
    # worst (maximum) ranking. Lower is better.
    rankingList=[oa[i][i] for i in range(len(oa))]
    return sum(rankingList)+max(rankingList)

def morefit(a,b):
    # Return whichever assignment scores better (lower).
    if fitness(a)>fitness(b):
        return b
    return a

def mutate(oa):
    # Swap the occupants of two randomly chosen offices.
    swaplist=random.sample(range(len(oa)),2)
    newoa=list(oa)
    newoa[swaplist[0]],newoa[swaplist[1]]=newoa[swaplist[1]],newoa[swaplist[0]]
    return newoa

def iterateAssignments(n):
    oa=seedAssignments
    for i in range(n):
        newoa=mutate(oa)
        oa=morefit(oa,newoa)
    print("Fitness: "+str(fitness(oa)))
    for j in range(len(oa)):
        # The person's name is the last entry of their ranking list.
        if j<10:
            print("Main 23"+str(j)+": "+oa[j][-1]+" "+str(oa[j][j]))
        else:
            print("Main 24"+str(j-9)+": "+oa[j][-1]+" "+str(oa[j][j]))
```
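As the correction in the title suggests, the loop above only ever keeps improvements, which makes it a hill climber. Textbook simulated annealing would also occasionally accept a *worse* swap, with a probability that shrinks as a "temperature" parameter cools, to escape local minima. A hedged sketch of that acceptance rule, with illustrative names that are not part of the original program:

```python
import math
import random

def anneal_step(current, current_fit, trial, trial_fit, temperature, rng):
    """Metropolis rule: always accept an improvement; accept a worse
    trial with probability exp(-(worsening)/temperature)."""
    if trial_fit <= current_fit:
        return trial, trial_fit
    if rng.random() < math.exp((current_fit - trial_fit) / temperature):
        return trial, trial_fit
    return current, current_fit

# Illustrative use with a made-up fitness gap of 2: at temperature 5.0
# the worse state is accepted with probability exp(-2/5), roughly 0.67.
# Cooling the temperature toward 0 makes the rule purely greedy again.
rng = random.Random(1)
state, fit = anneal_step("old", 10, "worse", 12, temperature=5.0, rng=rng)
```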

### 4 Responses to “Genetic Algorithm [Actually, Simulated Annealing]”

1. catatori Says:

Nice work, Bret. 😉

2. Andy Rundquist Says:

Awesome work, Bret! Looking at your code, it looks closer to Simulated Annealing than a population-based GA. In other words, it looks like you only ever have a single candidate that's being constantly mutated and evaluated. Is that correct? If so, I wonder if a population-based one would take fewer total fitness calculations. Also, in simulated annealing you should occasionally intentionally choose the worse fitness after a mutation, just to avoid local minima. Do you think that would change much? Sorry if I misunderstood your code.

• bretbenesh Says:

You are using words like “annealing,” so I am guessing that you understand my code completely.

Re-running the code with different seeds, I see that I do get local minima. However, I ended up with a global minimum by checking the code's output against a common-sense check of the data (the results were really good, and I only had a small number of cases to check).

Thanks for teaching me something new!