One way of getting a better protein is directed evolution (DE): start with the natural version and carry out successive rounds of mutation, keeping the variants that perform best at each round. This can take a lot of work, and it is difficult to optimize for multiple properties at once. This study introduces an active learning framework that uses protein language models (PLMs) and activity predictors to guide the DE process, sometimes needing only 4 rounds of DE to achieve 2- to 500-fold improvements over the native protein. The authors find that PLMs are essential to their approach, and the framework is modular, so improved PLMs can be swapped in as they become available. This represents a significant advance in protein engineering, streamlining the creation of optimized proteins for a wide range of applications.
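To make the general shape of such a loop concrete, here is a minimal, hypothetical sketch of PLM-guided, active-learning directed evolution. It is not the authors' algorithm: the proposal step, the PLM scoring stub, the feature function, and the toy "assay" are all placeholders, and the round/batch parameters are illustrative only. The point is the cycle it shows: propose mutants, rank them with a PLM prior plus a learned activity predictor, assay the top picks, and retrain the predictor before the next round.

```python
# Illustrative sketch only: every function below is a stand-in, not the study's method.
import random
import numpy as np
from sklearn.linear_model import Ridge

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose_mutants(parent: str, n: int) -> list[str]:
    """Generate single-point mutants of the parent sequence (toy proposal step)."""
    mutants = []
    for _ in range(n):
        pos = random.randrange(len(parent))
        aa = random.choice(AMINO_ACIDS)
        mutants.append(parent[:pos] + aa + parent[pos + 1:])
    return mutants

def plm_score(seq: str) -> float:
    """Placeholder for a protein language model log-likelihood (hypothetical stub)."""
    return -sum(ord(c) for c in seq) / len(seq)  # not a real PLM

def featurize(seq: str) -> np.ndarray:
    """Simple amino-acid composition features; a real pipeline might use PLM embeddings."""
    return np.array([seq.count(aa) for aa in AMINO_ACIDS], dtype=float)

def assay(seq: str) -> float:
    """Placeholder for a wet-lab activity measurement; returns a synthetic value."""
    return sum(seq.count(aa) for aa in "KRH") + random.gauss(0, 0.1)

def active_learning_de(parent: str, rounds: int = 4, pool: int = 200, picks: int = 10) -> str:
    """Run a few rounds of mutate -> score -> assay -> retrain, returning the best sequence."""
    X, y = [], []
    predictor = Ridge()
    best_seq, best_act = parent, assay(parent)
    for _ in range(rounds):
        candidates = propose_mutants(best_seq, pool)
        plm_prior = np.array([plm_score(s) for s in candidates])
        if X:
            # Retrain the activity predictor on everything assayed so far,
            # then combine its predictions with the PLM prior to rank candidates.
            predictor.fit(np.vstack(X), np.array(y))
            pred = predictor.predict(np.vstack([featurize(s) for s in candidates]))
            scores = plm_prior + pred
        else:
            scores = plm_prior  # no assay data yet, rank by the PLM prior alone
        top = [candidates[i] for i in np.argsort(scores)[-picks:]]
        for seq in top:  # "measure" the selected variants and record the results
            act = assay(seq)
            X.append(featurize(seq))
            y.append(act)
            if act > best_act:
                best_seq, best_act = seq, act
    return best_seq
```

Because the predictor and the PLM are queried only through simple scoring calls, either component could be replaced by a stronger model without changing the loop, which is the kind of modularity the summary describes.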